<h1>GPUImage 2, Redesigned in Swift</h1>
<p><em>April 20, 2016</em></p>
<p><a href="http://sunsetlakesoftware.com/2016/04/16/introducing-gpuimage-2-redesigned-swift">Brad Larson</a> (<a href="https://twitter.com/bradlarson/status/721389150117707777">tweet</a>, <a href="https://news.ycombinator.com/item?id=11517114">comments</a>):</p>
<blockquote cite="http://sunsetlakesoftware.com/2016/04/16/introducing-gpuimage-2-redesigned-swift">
<p>The rewritten Swift version of the framework, despite doing everything the Objective-C version does*, only uses 4549 lines of non-shader code vs. the 20107 lines of code before (shaders were copied straight across between the two). That&rsquo;s only 22% the size. That reduction in size is due to a radical internal reorganization which makes it far easier to build and define custom filters and other processing operations. For example, take a look at the difference between the <a href="https://github.com/BradLarson/GPUImage/blob/master/framework/Source/GPUImageSoftEleganceFilter.m">old GPUImageSoftEleganceFilter</a> (don&rsquo;t forget <a href="https://github.com/BradLarson/GPUImage/blob/master/framework/Source/GPUImageSoftEleganceFilter.h">the interface</a>) and <a href="https://github.com/BradLarson/GPUImage2/blob/master/framework/Source/Operations/SoftElegance.swift">the new SoftElegance operation</a>. They do the same thing, yet one is 62 lines long and the other 20. The setup for the new one is much easier to read, as a result.</p>
<p>* (OK, with just a few nonfunctional parts. See the bottom of this page.)</p>
<p>The Swift framework has also been made easier to work with. Clear and simple platform-independent data types (Position, Size, Color, etc.) are used to interact with the framework, and you get safe arrays of values from callbacks, rather than raw pointers. Optionals are used to enable and disable overrides, and enums make values like image orientations easy to follow.</p>
</blockquote>
<p>Because of open-source Swift, it now supports Linux. On the Mac and iOS side, though, it is surprising that this sort of thing is necessary when Apple provides Core Image. The <a href="https://github.com/BradLarson/GPUImage">original project</a> claims:</p>
<blockquote cite="https://github.com/BradLarson/GPUImage">
<p>This framework compares favorably to Core Image when handling video, taking only 2.5 ms on an iPhone 4 to upload a frame from the camera, apply a gamma filter, and display, versus 106 ms for the same operation using Core Image. CPU-based processing takes 460 ms, making GPUImage 40X faster than Core Image for this operation on this hardware, and 184X faster than CPU-bound processing. On an iPhone 4S, GPUImage is only 4X faster than Core Image for this case, and 102X faster than CPU-bound processing. However, for more complex operations like Gaussian blurs at larger radii, Core Image currently outpaces GPUImage.</p>
</blockquote>
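<p>The pipeline style Larson describes (a source chained through operations to an output) looks roughly like this in GPUImage 2. This is a minimal sketch based on the project&rsquo;s README, so treat the exact type names, the session preset, and the view setup as assumptions rather than a verified, runnable app:</p>
<pre><code>import GPUImage

// Capture from the camera, run one operation, and display the result.
// SaturationAdjustment stands in for any of the framework's operations
// (the SoftElegance operation linked above would slot in the same way).
let camera = try Camera(sessionPreset: AVCaptureSessionPreset640x480)
let filter = SaturationAdjustment()
let renderView = RenderView(frame: CGRect(x: 0, y: 0, width: 640, height: 480))

// Sources, operations, and outputs are chained with the --&gt; operator,
// which is what makes a 20-line operation definition sufficient.
camera --&gt; filter --&gt; renderView
camera.startCapture()
</code></pre>
<p>In a real app the RenderView would be placed in the view hierarchy and the <code>try</code> wrapped in error handling; the point here is only the declarative chaining that replaced the old filter subclass boilerplate.</p>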