{"id":11892,"date":"2015-07-31T12:05:31","date_gmt":"2015-07-31T16:05:31","guid":{"rendered":"http:\/\/mjtsai.com\/blog\/?p=11892"},"modified":"2015-09-25T09:37:09","modified_gmt":"2015-09-25T13:37:09","slug":"bitcode","status":"publish","type":"post","link":"https:\/\/mjtsai.com\/blog\/2015\/07\/31\/bitcode\/","title":{"rendered":"Bitcode"},"content":{"rendered":"<p><a href=\"https:\/\/news.ycombinator.com\/item?id=9726552\">drfuchs<\/a>:<\/p>\r\n<blockquote cite=\"https:\/\/news.ycombinator.com\/item?id=9726552\"><p>I managed to ask Chris Lattner this very question at WWDC (during a moment when he wasn&rsquo;t surrounded by adoring crowds). &ldquo;So, you&rsquo;re signaling a new CPU architecture?&rdquo; But, &ldquo;No; think more along the lines of &lsquo;adding a new multiply instruction&rsquo;. By the time you&rsquo;re in Bitcode, you&rsquo;re already fairly architecture-specific&rdquo; says he.<\/p><\/blockquote>\r\n<p><a href=\"https:\/\/twitter.com\/rentzsch\/status\/611274055098851328\">Wolf<\/a> <a href=\"https:\/\/twitter.com\/rentzsch\/status\/611275367953727488\">Rentzsch<\/a>:<\/p>\r\n<blockquote cite=\"https:\/\/twitter.com\/rentzsch\/status\/611274055098851328\"><p>surprised most dev gripes about bitcode is unpredictable optimization effects. Folks, we&rsquo;ve been living in emulated ISAs since the 90s<\/p><\/blockquote>\r\n<blockquote cite=\"https:\/\/twitter.com\/rentzsch\/status\/611275367953727488\"><p>I think bitcode is a huge win. About only thing I&rsquo;ll miss is cool ISA-specific instructions. 
Now reliant on OS vendor providing access.<\/p><\/blockquote>\r\n<p><a href=\"https:\/\/twitter.com\/landonfuller\/status\/611275138529492992\">Landon<\/a> <a href=\"https:\/\/twitter.com\/landonfuller\/status\/611275721273536512\">Fuller<\/a>:<\/p>\r\n<blockquote cite=\"https:\/\/twitter.com\/landonfuller\/status\/611275138529492992\"><p>I&rsquo;m a lot less worried about emulated ISAs given that the chances I have to debug one are pretty much nil.<\/p><\/blockquote>\r\n<blockquote cite=\"https:\/\/twitter.com\/landonfuller\/status\/611275721273536512\"><p>Bitcode: non-reproducible Apple-internal toolchain bugs, emergent bugs from undefined behavior that previously worked, etc ...<\/p><\/blockquote>\r\n<p><a href=\"https:\/\/forums.developer.apple.com\/thread\/3971\">dshirley<\/a>:<\/p>\r\n<blockquote cite=\"https:\/\/forums.developer.apple.com\/thread\/3971\"><p>When it becomes a requirement to submit apps in bitcode format, how will this impact architecture specific code (ie. assembly, or anything that is ifdef&rsquo;d for that matter).  It makes sense that assembly isn&rsquo;t converted to bitcode, but doesn&rsquo;t everything need to be in bitcode in order for an archive to be fully encoded in bitcode?  I have an app that&rsquo;s hitting a compile warning when archiving complaining that a specific 3rd party library doesn&rsquo;t contain bitcode so the app cannot be archived with bitcode.  
That 3rd party library won&rsquo;t emit bitcode ostensibly because it contains assembly (I could be wrong about the cause, though).<\/p><\/blockquote>\r\n<p><a href=\"https:\/\/twitter.com\/rbrockerhoff\/status\/611279350575558657\">Rainer Brockerhoff<\/a>:<\/p>\r\n<blockquote cite=\"https:\/\/twitter.com\/rbrockerhoff\/status\/611279350575558657\"><p>I suppose this would also allow Swift ABIs to change at any time, without dylibs in the app.<\/p><\/blockquote>\r\n<p>See also Accidental Tech Podcast episodes <a href=\"http:\/\/atp.fm\/episodes\/122\">122<\/a>, <a href=\"http:\/\/atp.fm\/episodes\/123\">123<\/a>, and <a href=\"http:\/\/atp.fm\/episodes\/124\">124<\/a>.<\/p>\r\n<p>Update (2015-09-25): <a href=\"http:\/\/lowlevelbits.org\/bitcode-demystified\/\">Alex Denisov<\/a>:<\/p>\r\n<blockquote cite=\"http:\/\/lowlevelbits.org\/bitcode-demystified\/\">\r\n<p>This picture clearly demonstrates how communication between frontend and backend is done using IR. LLVM <a href=\"http:\/\/llvm.org\/docs\/LangRef.html\">has its own<\/a> format that can be encoded using the LLVM bitstream file format - <a href=\"http:\/\/llvm.org\/docs\/BitCodeFormat.html\">Bitcode<\/a>.<\/p>\r\n<p>Just to recall it explicitly - <strong>Bitcode is a bitstream representation of LLVM IR<\/strong>.<\/p>\r\n<\/blockquote>\r\n<p><a href=\"https:\/\/medium.com\/@FredericJacobs\/why-i-m-not-enabling-bitcode-f35cd8fbfcc5\">Frederic Jacobs<\/a>:<\/p>\r\n<blockquote cite=\"https:\/\/medium.com\/@FredericJacobs\/why-i-m-not-enabling-bitcode-f35cd8fbfcc5\"><p>Bitcode will enable support for<strong> better <\/strong><a href=\"https:\/\/en.wikipedia.org\/wiki\/Microarchitecture\"><strong>microarchitecture<\/strong><\/a><strong> support <\/strong>but gets nowhere close to target independence. Applications compiled for the armv7 target could still run on armv7s devices, but additional optimisations make applications faster if they contain an armv7s slice. 
The advantage that Bitcode provides on top of app thinning is negligible in my opinion since it will only provide a <em>slight speed up<\/em> until the developer uploads a new build with the optimized slice.<\/p><p>[\u2026]<\/p><p>The <strong>centralization of the building and signing process <\/strong>is what worries me: an adversary could find a vulnerability in the LLVM backend to obtain remote code execution on Apple\u2019s Bitcode compilation infrastructure to inject a compiler trojan that would affect every single app on the App Store that was submitted with Bitcode.<\/p><\/blockquote>","protected":false},"excerpt":{"rendered":"<p>drfuchs: I managed to ask Chris Lattner this very question at WWDC (during a moment when he wasn&rsquo;t surrounded by adoring crowds). &ldquo;So, you&rsquo;re signaling a new CPU architecture?&rdquo; But, &ldquo;No; think more along the lines of &lsquo;adding a new multiply instruction&rsquo;. By the time you&rsquo;re in Bitcode, you&rsquo;re already fairly architecture-specific&rdquo; says he. 
Wolf [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"apple_news_api_created_at":"","apple_news_api_id":"","apple_news_api_modified_at":"","apple_news_api_revision":"","apple_news_api_share_url":"","apple_news_coverimage":0,"apple_news_coverimage_caption":"","apple_news_is_hidden":false,"apple_news_is_paid":false,"apple_news_is_preview":false,"apple_news_is_sponsored":false,"apple_news_maturity_rating":"","apple_news_metadata":"\"\"","apple_news_pullquote":"","apple_news_pullquote_position":"","apple_news_slug":"","apple_news_sections":"\"\"","apple_news_suppress_video_url":false,"apple_news_use_image_component":false,"footnotes":""},"categories":[4],"tags":[262,1246,31,1137,229,71,901],"class_list":["post-11892","post","type-post","status-publish","format-standard","hentry","category-programming-category","tag-arm","tag-bitcode","tag-ios","tag-ios-9","tag-llvm","tag-programming","tag-swift-programming-language"],"apple_news_notices":[],"_links":{"self":[{"href":"https:\/\/mjtsai.com\/blog\/wp-json\/wp\/v2\/posts\/11892","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mjtsai.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mjtsai.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mjtsai.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mjtsai.com\/blog\/wp-json\/wp\/v2\/comments?post=11892"}],"version-history":[{"count":2,"href":"https:\/\/mjtsai.com\/blog\/wp-json\/wp\/v2\/posts\/11892\/revisions"}],"predecessor-version":[{"id":12403,"href":"https:\/\/mjtsai.com\/blog\/wp-json\/wp\/v2\/posts\/11892\/revisions\/12403"}],"wp:attachment":[{"href":"https:\/\/mjtsai.com\/blog\/wp-json\/wp\/v2\/media?parent=11892"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mjtsai.com\/blog\/wp-json\/wp\/v2\/categories?post=11892"
},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mjtsai.com\/blog\/wp-json\/wp\/v2\/tags?post=11892"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}