Archive for February 2017

Friday, February 24, 2017

Cloudbleed: Cloudflare’s HTTPS Traffic Leak

Tavis Ormandy (via Hacker News):

On February 17th 2017, I was working on a corpus distillation project, when I encountered some data that didn’t match what I had been expecting. It’s not unusual to find garbage, corrupt data, mislabeled data or just crazy non-conforming data...but the format of the data this time was confusing enough that I spent some time trying to debug what had gone wrong, wondering if it was a bug in my code. In fact, the data was bizarre enough that some colleagues around the Project Zero office even got intrigued.

It became clear after a while we were looking at chunks of uninitialized memory interspersed with valid data. The program that this uninitialized data was coming from just happened to have the data I wanted in memory at the time. That solved the mystery, but some of the nearby memory had strings and objects that really seemed like they could be from a reverse proxy operated by cloudflare - a major cdn service.


It turned out that in some unusual circumstances, which I’ll detail below, our edge servers were running past the end of a buffer and returning memory that contained private information such as HTTP cookies, authentication tokens, HTTP POST bodies, and other sensitive data. And some of that data had been cached by search engines.


It turned out that the underlying bug that caused the memory leak had been present in our Ragel-based parser for many years but no memory was leaked because of the way the internal NGINX buffers were used. Introducing cf-html subtly changed the buffering which enabled the leakage even though there were no problems in cf-html itself.

Once we knew that the bug was being caused by the activation of cf-html (but before we knew why) we disabled the three features that caused it to be used. Every feature Cloudflare ships has a corresponding feature flag, which we call a ‘global kill’. We activated the Email Obfuscation global kill 47 minutes after receiving details of the problem and the Automatic HTTPS Rewrites global kill 3h05m later. The Email Obfuscation feature had been changed on February 13 and was the primary cause of the leaked memory, thus disabling it quickly stopped almost all memory leaks.
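Cloudflare’s postmortem traced the overrun to an end-of-buffer check in the Ragel-generated parser that used equality where an ordered comparison was needed. A hypothetical Swift sketch of that failure mode (not Cloudflare’s actual code, which is generated C):

```swift
// Hypothetical sketch: an end-of-buffer check written with == instead of
// >= misses when the parse cursor can advance by more than one byte.
let buffer = Array("hello".utf8)
let pe = buffer.count          // index one past the end of the buffer
var p = 0                      // parse cursor

// A parser state machine may consume multi-byte tokens:
func step(at i: Int) -> Int { return 2 }

// Safe: terminates even when p jumps from pe - 1 past pe.
while p < pe {                 // with `p != pe`, p = 4 -> 6 skips past pe,
    p += step(at: p)           // and the loop keeps reading out of bounds
}
```

With the `==` form of the check, a cursor that steps over `pe` never satisfies the exit condition, which is how the parser ended up reading adjacent heap memory.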

Adam Clark Estes:

You might not be familiar with Cloudflare itself, but the company’s technology is running on a lot of your favorite websites. Cloudflare describes itself as a “web performance and security company.” Originally an app for tracking down the source of spam, the company now offers a whole menu of products to websites, including performance-based services like content delivery services; reliability-focused offerings like domain name server (DNS) services; and security services like protection against distributed denial of service (DDoS) attacks.

Jeff Johnson:

The scandal is not that Cloudflare exposed private info. The scandal is that Cloudflare has access to private info.

Nobody should ever use a third-party HTTPS proxy. You might as well not even use HTTPS. That’s not end-to-end encryption.

1Password’s hosted service uses Cloudflare, but it does use end-to-end encryption:

No secrets are transmitted between 1Password clients and 1Password servers when you sign in and use the service. Our sign-in uses SRP, which means that server and client prove their identity to each other without transmitting any secrets. This means that users of 1Password do not need to change their Master Passwords.

Your actual data is encrypted with three layers (including SSL/TLS), and the other two layers remain secure even if the secrecy of an SSL/TLS channel is compromised.

Dropbox and ChronoSync rely on HTTPS, only encrypting the user data after it gets to the server.

See also: this list of affected sites (via Hacker News).

iOS 10.2.1 Update Reduces Unexpected Shutdowns

Juli Clover:

For the last several months, iPhone 6, 6s, 6 Plus, and 6s Plus users have been dealing with a problem that causes their devices to unexpectedly shut down, an issue that Apple now says it has successfully addressed in the latest iOS 10.2.1 update, released to the public on January 23.

In a statement provided to TechCrunch, Apple says that the iOS 10.2.1 update has resulted in an 80 percent reduction of unexpected shutdowns on the iPhone 6s and a 70 percent reduction of unexpected shutdowns on the iPhone 6.

There’s still something going on. After updating to iOS 10.2.1, I went skiing and within two hours (during which all I did was take one photo) my iPhone went from 100% battery (charged in the car) to unable to turn on. After plugging it in to charge for a few minutes, it booted but showed nearly zero battery.

Previously: Apple’s Support Gap.

Let Your Swift XCTest Methods Throw

Brian King:

One place where the XCTest assertion utilities fall a bit short has been with managing Optional variables in Swift. XCTAssertNotNil doesn’t provide any mechanism for unwrapping variables, easily leading to assertion checks like this[…]


A nice solution is possible, due to an often-overlooked feature of XCTestCase. If the test function is marked with throws, any thrown exception will cause the test to fail. We can use this to fail our tests using normal Swift flow control mechanisms[…]

Unit tests are much more pleasant to read and write if you can get rid of the clutter. There are two key ways that I do this. First, I have an MJTTestCase subclass with convenience methods that wrap the XCTAssert functions. My methods have obscenely short names. For example, XCTAssertTrue() becomes t() and XCTAssertEqual() becomes eq(). Swift’s support for unnamed parameters and overrides really helps here. These are not features that I use much in regular code, but they really come in handy with tests, where there are a small number of methods that are called many times, and I want the focus to be on the parameters rather than the methods themselves.

Second, as King describes, I take advantage of the fact that test methods are now allowed to throw. This is so much better than force unwrapping. My equivalent to his AssertNotNilAndUnwrap() is called unwrap(). It avoids having to write lots of guard statements, either returning the value in the optional or failing the test. If the test fails, it throws, which is how the return type can be T instead of T?. I also have variants like unwrapString(), which also do an as? to check the type.

The same technique works for checking errors. I have ok(), which takes an expression that can throw and fails the test (collecting the line number and error) if it does. If it succeeds, the return value is available for use. I also have e(), which makes sure that an NSError was thrown and returns it so that it can be inspected with further assertions. The XCTest equivalent is XCTAssertThrowsError(), which wants you to pass in an error handler closure. The closure has a number of drawbacks: it causes extra boilerplate and indentation, the closure’s body doesn’t auto-indent properly due to an Xcode bug, and my subclass convenience methods must be accessed through self. Instead, I can simply write:

let error = try e(codeThatShouldThrow())
eq(error.code, NSFileNoSuchFileError)
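The `ok()` and `e()` helpers aren’t shown; a minimal sketch of what `e()` might look like (hypothetical names and signature, without the file/line bookkeeping and `XCTFail` call the real version would use) is:

```swift
import Foundation

struct ExpectedErrorNotThrown: Error {}

// Hypothetical sketch of e(): evaluates a throwing expression, fails the
// test if nothing is thrown, and otherwise returns the bridged NSError so
// further assertions can inspect its domain and code.
func e<T>(_ expression: @autoclosure () throws -> T,
          recordFailure: (String) -> Void = { print($0) }) throws -> NSError {
    do {
        _ = try expression()
    } catch let error as NSError {
        return error  // every Swift Error bridges to NSError
    }
    recordFailure("Expected an error to be thrown")
    throw ExpectedErrorNotThrown()
}
```

Because the helper itself throws when no error occurs, the test method’s `throws` declaration handles the failure path with no closure or boilerplate.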

Previously: Proposal: XCTest Support for Swift Error Handling.

Update (2017-03-05): By request, here’s the source for my unwrap():

func unwrap<T>(_ value: T?, file: StaticString = #file, line: UInt = #line) throws -> T {
    guard let value = value else {
        fail("Unwrapped nil instead of \(T.self)", file: file, line: line)
        throw ExpectedNotNilError()
    }
    return value
}

Update (2019-10-13): Xcode 11 now includes its own XCTUnwrap (via Bas Broek).

Adventures in Siri Failures: Reminders Edition

Dan Moren:

Now, I use Siri to add stuff to my Shopping List in Reminders all the time, and that generally works fine. But for some reason when I tried to do the same thing here, I kept getting the same error: “Sorry, I can’t add that to your library. You don’t seem to be subscribed to Apple Music.”

This is infuriating, for a few reasons. First: What the hell about this query even remotely suggests that I’m trying to do anything with music? Secondly, I’m not subscribed to Apple Music, so why is that lack of a subscription interfering with something I legitimately want to do? It ticks me off to no end that a feature I don’t even use is interfering with something I want to accomplish.

Also, as I keep saying, Siri seems to be much better at turning speech into text than it is at acting on that text. So why isn’t there a way to make a reminder without having it try (and fail) to interpret the text within that reminder? Put another way, it’s frustrating that it can nail the hard part, but then it throws away the results (and my utterances) because it was reading too much into what should have been the easy part.

Previously: Apple Pushes iPhone 6s Pop-up Ads to App Store.

Thursday, February 23, 2017

SHA-1 Collision

Google (Hacker News):

Today, 10 years after SHA-1 was first introduced, we are announcing the first practical technique for generating a collision. This represents the culmination of two years of research that sprung from a collaboration between the CWI Institute in Amsterdam and Google. We’ve summarized how we went about generating a collision below. As a proof of the attack, we are releasing two PDFs that have identical SHA-1 hashes but different content.

For the tech community, our findings emphasize the necessity of sunsetting SHA-1 usage. Google has advocated the deprecation of SHA-1 for many years, particularly when it comes to signing TLS certificates. As early as 2014, the Chrome team announced that they would gradually phase out using SHA-1. We hope our practical attack on SHA-1 will cement that the protocol should no longer be considered secure.


This attack required over 9,223,372,036,854,775,808 SHA1 computations. This took the equivalent processing power as 6,500 years of single-CPU computations and 110 years of single-GPU computations.


The SHAttered attack is 100,000 times faster than the brute force attack that relies on the birthday paradox. The brute force attack would require 12,000,000 GPU years to complete, and it is therefore impractical.


Basically, each PDF contains a single large (421,385-byte) JPG image, followed by a few PDF commands to display the JPG. The collision lives entirely in the JPG data - the PDF format is merely incidental here. Extracting out the two images shows two JPG files with different contents (but different SHA-1 hashes since the necessary prefix is missing). Each PDF consists of a common prefix (which contains the PDF header, JPG stream descriptor and some JPG headers), and a common suffix (containing image data and PDF display commands).

The header of each JPG contains a comment field, aligned such that the 16-bit length value of the field lies in the collision zone. Thus, when the collision is generated, one of the PDFs will have a longer comment field than the other. After that, they concatenate two complete JPG image streams with different image content - File 1 sees the first image stream and File 2 sees the second image stream. This is achieved by using misalignment of the comment fields to cause the first image stream to appear as a comment in File 2 (more specifically, as a sequence of comments, in order to avoid overflowing the 16-bit comment length field). Since JPGs terminate at the end-of-file (FFD9) marker, the second image stream isn’t even examined in File 1 (whereas that marker is just inside a comment in File 2).

I think SHAttered overstates the impact on Git. Linus Torvalds (2005, via Joe Groff):

I really hate theoretical discussions.

The fact is, a lot of crap engineering gets done because of the question “what if?”. It results in over-engineering, often to the point where the end result is quite a lot measurably worse than the sane results.

You are literally arguing for the equivalent of “what if a meteorite hit my plane while it was in flight - maybe I should add three inches of high-tension armored steel around the plane, so that my passengers would be protected”.


And the thing is, if somebody finds a way to make sha1 act as just a complex parity bit, and comes up with generating a clashing object that actually makes sense, then going to sha256 is likely pointless too - I think the algorithm is basically the same, just with more bits. If you’ve broken sha1 to the point where it’s that breakable, then you’ve likely broken sha256 too.

He’s being criticized for saying this, but (so far) it looks like he was actually right.

Linus Torvalds (Hacker News):

Put another way: I doubt the sky is falling for git as a source control management tool. Do we want to migrate to another hash? Yes. Is it “game over” for SHA1 like people want to say? Probably not.

I haven’t seen the attack details, but I bet

(a) the fact that we have a separate size encoding makes it much harder to do on git objects in the first place

(b) we can probably easily add some extra sanity checks to the opaque data we do have, to make it much harder to do the hiding of random data that these attacks pretty much always depend on.

Previously: MD5 Collision.

Update (2017-02-24): See also: Subversion (ArsTechnica), Mercurial, Bruce Schneier in 2005 and now.

Update (2017-03-09): See also: Linus Torvalds (Hacker News), Jon Gilmore (via Zaki Manian), Matthew Green.

Update (2017-03-16): See also: Linus Torvalds (via Reddit).

Update (2019-05-17): Thomas Peyrin:

Our paper on chosen-prefix collision attack for SHA-1 is out. TL;DR: computing such collision is very practical, for a reasonable cost. More results coming soon. Remove SHA-1 now if you still implement it for any digital signature/certificate use.

Migrating Firefox for iOS to Swift 3.0

Mozilla (via Emily Toop):

A week ago we completed the migration of the entire Firefox for iOS project from Swift 2.3 to Swift 3.0. With over 206,000 lines of code, migrating a project of this size is no small feat. Xcode’s built-in conversion tool is a fantastic help, but leaves your codebase in a completely uncompilable state that takes a good long while to resolve.


The first hitch in the plan occurred fairly quickly. Our test targets, despite not importing code from other targets further down the dependency tree, all required our primary target, Client, as the host app in order to run. Therefore our plan to ensure each target robustly passed its tests before moving onto the next target was impossible. We would have to migrate all of the targets, then the test targets and then ensure that the tests pass. This would mean that we may possibly be performing code changes in dependent targets on incorrectly migrated code, which added an extra layer of uncertainty. In addition, being unable to execute the code before moving on would mean that if we made a poor decision when solving a migration issue, that decision may end up proliferating through many targets before we realised that the code change produces a crash.

The second hitch came when migrating some of the larger targets, in particular Storage. Even after all this time, Xcode’s ability to successfully compile Swift is…flaky. When, after performing the auto-conversion, your first build error is a segfault in Xcode, this is not at all helpful. Especially when the line of code mentioned in the segfault stack trace is in an unassuming class that is doing nothing special. And when you comment out all of the code in that class, it still produces a segfault.


It had taken 3.5 engineers, 3 members of QA and 3.5 weeks, but the feeling when we were finally ready to hit merge was jubilant.

They ran into an interesting NSKeyedArchiver issue.

Previously: Getting to Swift 3 at Airbnb.

Update (2017-02-24): Thaddeus Ternes:

The Astra board didn’t change for over two weeks because of the Swift 3 migration. That was two weeks I didn’t get new features or enhancements built, or customer-reported issues fixed. I was simply agreeing to proposed changes by the tools, and then fixing the problems it created along the way.

MagicGrips for Magic Mouse

Chance Miller:

Elevation Lab, the company behind a handful of popular Apple accessories, is today announcing its latest product: MagicGrips. The company says that this accessory is designed to work with the Magic Mouse and makes it easier to grip.

Elevation Lab says that this accessory makes the Magic Mouse more comfortable to use by widening your grip and allowing you to squeeze the mouse without it moving upwards. Furthermore, the grip is said to release hand tension.

The Magic Mouse is the least comfortable mouse I have ever used, due to the shape of the edges. This looks like it would help.

(Anyone remember the name of the case that snapped around the original iMac puck mouse to give a more standard shape?)

Previously: Apple’s New Magic Keyboard, Mouse, and Trackpad, Magic Mouse Review.

Update (2017-02-23): I think the iMac mouse adapter that I used was the UniTrap.

Opening the User Library Folder

Rob Griffiths:

Yesterday, I wrote about an apparent change in Finder’s Library shortcut key. To wit, it used to be that holding the Option key down would reveal a Library entry in Finder’s Go menu.

However, on my iMac and rMBP running macOS 10.12.3—and on others’ Macs, as my report was based on similar findings by Michael Tsai and Kirk McElhearn—the Option key no longer worked; it was the Shift key. But on a third Mac here, running the 10.12.4 beta, the shortcut was back to the Option key.


After some experimentation, I was able to discover why the shortcut key changes, and how to change it between Shift and Option at any time. This clearly isn’t a feature, so I guess it’s a bug, but it’s a weird bug.

Update (2017-04-07): Adam C. Engst:

My suspicion is that this weird Finder state, which may date back to the Sierra betas, can be triggered in ways other than relaunching the Finder. Quitting or force-quitting the Finder from within Activity Monitor doesn’t seem to do it, but I can imagine other scenarios that might leave the Finder in an unusual state — a kernel panic, for instance, or a loss of power to the Mac. Over years of usage, it’s easy to see something like this happening to many people.

Video Pros Moving From Mac to Windows for High-End GPUs

Marco Solorio (May 2016):

But as good as that juiced up Mac Pro Tower is today, I know at some point, the time will have to come to an end, simply because Apple hasn’t built a PCIe-based system in many years now. As my article described, the alternative Mac Pro trashcan is simply not a solution for our needs, imposing too many limitations combined with a very high price tag.

The Nvidia GTX 1080 might be the final nail in the coffin. I can guarantee at this point, we will have to move to a Windows-based workstation for our main edit suite and one that supports multiple PCIe slots specifically for the GTX 1080 (I’ll most likely get two 1080s at that new price-point).


Even a Thunderbolt-connected PCIe expansion chassis to a Mac Pro trashcan won’t help, due to the inherent bandwidth limits that Thunderbolt has as compared to the bus speeds of these GPU cards. And forget about stacking these cards in an expansion chassis… just not going to happen.

Via John Gruber:

This may be a small market, but it’s a lucrative one. Seems shortsighted for Apple to cede it.

Timo Hetzel:

Moving my video workflow to a modern PC could save me an estimated 4-8 hours every week. I wonder if Apple knows/cares.

Previously: Getting a New 2013 Mac Pro in 2017, How Apple Alienated Mac Loyalists.

Update (2017-02-24): See also: Hacker News.


This has been an ongoing problem since the summer. Some have reverted back to using several 9xx cards (which have spiked in price) while others have switched platforms. Lacking any real progress on this, I would suspect many in this situation would abandon OSX permanently by the end of the year. And if you give up OSX on your desktop, the incentive to stay in that environment on your laptop, tablet, and phone go way down.

This is a serious problem and the only outcomes are either a) Nvidia GPUs are supported, or b) OSX is abandoned, because the simple fact is that Nvidia GPUs are more important long-term than the entire sum of Apple’s hardware; I can replace a tablet or desktop or laptop, but I can’t replace a Pascal TITAN X.

Update (2017-02-25): See also: Reddit.

Update (2017-03-06): Owen Williams (via Jeff Johnson, Hacker News):

I’m a developer, and it seems to me Apple doesn’t pay any attention to its software or care about the hundreds of thousands of developers that have embraced the Mac as their go-to platform.


It took me months to convince myself to do it, but I spent weeks poring over forum posts about computer specs and new hardware before realizing how far ahead the PC really is now: the NVIDIA GTX 1080 graphics card is an insane work-horse that can play any game — VR or otherwise — you can throw at it without breaking a sweat.

I realized I’m so damn tired of Apple’s sheer mediocrity in both laptops and desktops, and started actually considering trying Windows again.

See also: The Talk Show.

Update (2017-03-22): Owen Williams:

After waiting eagerly for the MacBook Pro refresh, then being utterly disappointed by what Apple actually shipped — a high-end priced laptop with poor performance — I started wondering if I could go back to Windows. Gaming on Mac, which initially showed promising signs of life had started dying in 2015, since Apple hadn’t shipped any meaningful hardware bumps in years, and I was increasingly interested in Virtual Reality… but Oculus dropped support for the Mac in 2016 for the same reasons.




I don’t say this lightly, but Windows is back, and Microsoft is doing a great job. Microsoft is getting better, faster at making Windows good than Apple is getting better at doing anything to OS X.


However, in pursuit of the continual shrinking and lightening of the product line, the gap between the specs available from Apple and the major PC vendors in the workstation category has finally reached the point where even Apple loyalists are taking notice. We’ll see what Apple releases over the next few months (and years), but as I write this, compared to the MacBook Pro, portable workstations from the major PC vendors can be configured with faster processors, four times as much system AND video RAM, as well as more (and upgradeable) storage. As compared to the Mac Pro, desktop workstations from the PC vendors can be configured with more than three times the number of processor cores, sixteen times as much RAM, and double the number of (more powerful and replaceable) video cards. Compare these specs to the iMac, and the gap is even larger.

Wednesday, February 22, 2017

Overcast 3

Marco Arment (tweet):

Previously, tapping an episode in the list would immediately begin playback. This is nice when you want it, but accidental input was always an issue: I found it too easy to accidentally begin playing something that I was trying to rearrange, delete, or see info about.


Some kind of “Up Next”-style fast queue management has been one of Overcast’s most-requested features since day one. It took me a long time to come around to the idea because I thought my playlists served the same role.


Google provides an extensive control panel that lets you block certain ad categories. Most are clearly placed in Sensitive Categories and were easily disabled before launch, like gambling, drugs, etc., but I kept hearing from customers who’d seen other ads that offended both of us.


No closed-source code will be embedded in Overcast anymore, and I won’t use any more third-party analytics services. I’m fairly confident that Apple has my back if a government pressures them to violate their customers’ rights and privacy, but it’s wise to minimize the number of companies that I’m making that assumption about.

Fortunately, the Google ads made relatively little — about 90% of Overcast’s revenue still comes from paid subscriptions, which are doing better now. The presence of ads for non-subscribers is currently more important than the ads themselves, so I can replace them with pretty much anything. So I rolled my own tasteful in-house ads with class-leading privacy, which show in the Now Playing and Add Podcast screens[…]

I really like the interface refinements in this version. I didn’t use episode playlists much before because it was so awkward to add to them. Now it’s easy.

Triaging episodes is much easier now, too. It’s easier to read the summary, I don’t accidentally play the episode when I just wanted to see its info, and no more swipe-to-delete, which had been slow and not fully reliable.

I think it still needs some work for handling unfinished episodes, though. If I don’t add them to a playlist before starting them, it’s too easy to switch to another episode, lose my place, and forget to go back. I would like to see either a history view or the return of the In Progress smart playlist.

See also: Steven Aquino, John Gruber, Federico Viticci (tweet).

Previously: Twitter Sells Fabric to Google.

Update (2017-02-23): Jason Snell:

Now, with Overcast 3, I have a different approach. I now have two playlists. One, called Priority Playlist, basically functions as my play queue. That’s the stuff I will definitely listen to if I have the time, ordered in a way to keep me happy. A few of my must-listen podcasts add their episodes to this playlist automatically, but most don’t.

The second playlist is called All Episodes, and as the name would imply, it shows every podcast episode from every podcast I subscribe to, with the newest episodes at the top. From this list, I can scroll to see what’s new and if anything pops up as an immediate must-listen. When I find such an episode, I tap once to reveal Overcast 3’s new episode-action strip, tap the Add icon, and then tap “Add to Priority Playlist” or, if I’m really excited, “Play Next.”

Update (2017-02-24): See also: Accidental Tech Podcast, The Talk Show, Under the Radar.

Google Site Search Discontinued

Barb Darrow (Hacker News):

This spring, Google plans to discontinue Google Site Search, a product it has sold to web publishers that wanted to apply the industry’s leading search technology to their own sites.


Once a customer’s allocation of search queries is exhausted, the account will “automatically convert” to the company’s Custom Search Engine, or CSE for short.


CSE is a free, advertising-supported version of Google’s search technology, that provides similar features and functions to GSS, according to the email.

This is disappointing. The e-mail that I received seemed to suggest that I should look into Google Cloud Search, but that’s a totally different product. To provide a search engine for my Web site, I would need to switch to CSE. Years ago, I switched from CSE to GSS because I wanted a better user experience and no ads. CSE devotes much more of the page to ads than a regular Google search; on my 30-inch display, the actual search results from my site start more than halfway down. Now, Google apparently would rather show ads than let me keep paying for GSS.

I’m not sure yet what I’ll do. I have been using DuckDuckGo’s search for this blog, but when I tried it on the C-Command site the results were much worse (less relevant and incomplete) than Google’s. However, that was a while ago, so perhaps it’s better now.

See also: Barry Schwartz.

Update (2017-02-22): There are also changes to CSE.

Update (2018-07-20): Google (via John Gordon):

We are excited to announce an expansion of our Custom Search Engine offerings. We offer the following implementation options for Custom Search Engine.

OmniOutliner Essentials

Ken Case (tweet):

In OmniOutliner’s new Essentials edition, your entire focus is on your own content: there are no distracting sidebars or panels. You can choose to work in a window or in a distraction-free full-screen mode, selecting from a set of beautiful built-in themes. As you write, you’ll be able to see some key statistics about your content so you can track progress towards your goals. But our goal is to help you focus on your content and whatever task you’re working on—not on the tool you’re using.

With the Essentials edition, we’ve lowered OmniOutliner’s entry price from $49.99 to an extremely affordable $9.99.

Brent Simmons:

MORE was by Living Videotext, which was Dave Winer’s company. Later I went to work at Dave’s company UserLand Software, which also included an outliner in its app Frontier, which I worked on. So there is a sort-of family tree connection from OmniOutliner back to MORE.

Thunderbolt 3 and USB-C Infographic

Lloyd Chambers:

The infographic from OWC shown below might help in some ways, but there are various “gotchas”. MPG recommends generally buying full-speed Thunderbolt 3 cables, for maximum interoperability. However, lower speed cables intended for use with USB-C have their place also.

Monday, February 20, 2017

Provide Custom Collections for Dictionary Keys and Values


This proposal addresses two problems:

  • While a dictionary’s keys collection is fine for iteration, its implementation is inefficient when looking up a specific key, because LazyMapCollection doesn’t know how to forward lookups to the underlying dictionary storage.
  • Dictionaries do not offer value-mutating APIs. The mutating key-based subscript wraps values in an Optional. This prevents types with copy-on-write optimizations from recognizing they are singly referenced.


Dictionary values can be modified through the keyed subscript by direct reassignment or by using optional chaining. Both of these statements append 1 to the array stored by the key "one":

// Direct re-assignment
dict["one"] = (dict["one"] ?? []) + [1]

// Optional chaining
dict["one"]?.append(1)

Both approaches present problems.

The proposed solution is an improvement but still seems a bit awkward.

Update (2017-02-20): Airspeed Velocity:

There are other changes likely for Swift 4 that would make this less awkward (e.g. to supply a default value when subscripting).

It’s just that this particular change was ABI impacting (changes the type of Dictionary.Value) so was proposed during stage 1.
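For comparison, the defaulting subscript that later shipped in Swift 4 turns the read-unwrap-reassign pattern into a single in-place mutation:

```swift
var dict: [String: [Int]] = [:]

// The Swift 3-era workaround: read, unwrap with a fallback, reassign.
dict["one"] = (dict["one"] ?? []) + [1]

// Swift 4's defaulting subscript mutates the stored array directly,
// with no Optional wrapping in between:
dict["one", default: []].append(1)
```

Because the default is supplied at the subscript, the value never passes through an `Optional`, which also preserves the copy-on-write fast path the proposal is concerned with.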

Dash 4

Kapeli (tweet):

Docset Playgrounds – Most docsets now show “Play” buttons which let you quickly test snippets of code


Search Using Selected Text – This feature has been completely remade and is now more reliable, even in apps that don’t support system services.


Tab Improvements – You can now reopen the last closed tab, duplicate tabs and close all tabs except for the selected one


You can now copy the external URL of documentation pages for easier sharing

To make the Play button show up for Apple’s documentation, make sure that the language is set to Swift.

Ruby’s reject!

Accidentally Quadratic (Hacker News):

The code used to be linear, but it regressed in response to bug #2545, which concerned the behavior when the block passed to reject! executed a break or otherwise exited early. Because reject! is in-place, any partial modifications it makes are still visible after an early exit, and reject! was leaving the array in a nonsensical state. The obvious fix was to ensure that the array was always in a consistent state, which is what resulted in the “delete every time” behavior.

I find this interesting as a cautionary tale of how several of Ruby’s features (here, ubiquitous mutability, blocks, and nonlocal exits) interact to create surprising edge cases that need to be addressed, and how addressing those edge cases can easily result in yet more problems (here, quadratic performance).
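The trade-off is language-independent; a hypothetical Swift sketch of the two strategies (Ruby’s actual implementation is in C):

```swift
// Quadratic: delete each rejected element immediately, so the array is
// always in a consistent state (even on an early exit), but every
// remove(at:) shifts the tail -- O(n) per deletion, O(n^2) overall.
func rejectEagerly(_ array: inout [Int], _ reject: (Int) -> Bool) {
    var i = 0
    while i < array.count {
        if reject(array[i]) {
            array.remove(at: i)
        } else {
            i += 1
        }
    }
}

// Linear: compact kept elements toward the front with a write cursor and
// truncate once at the end -- but an early exit would leave the array
// half-compacted, which is exactly the bug #2545 scenario.
func rejectByCompacting(_ array: inout [Int], _ reject: (Int) -> Bool) {
    var write = 0
    for read in 0..<array.count where !reject(array[read]) {
        array[write] = array[read]
        write += 1
    }
    array.removeLast(array.count - write)
}
```

Both produce the same result when the block runs to completion; they differ only in what state is visible if iteration stops partway through.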

Saturday, February 18, 2017

The State of iBooks in Early 2017

Michael E. Cohen:

iBooks is not quite as unreliable and confusing as it was when I wrote about it last year, but neither has it improved nearly as much as loyal iBooks users deserve. Moreover, what little support documentation Apple provides is sketchy and inaccurate, leaving the impression that even the support and documentation departments within Apple are ignoring iBooks.


Now and then, the Library window in iBooks on the Mac gets confused, showing books that aren’t there, or duplicating thumbnails for books that are there. A couple of days ago, I found two thumbnails for a short EPUB I had just uploaded. The second thumbnail was a phantom, and actually represented the next book in the window, which I only discovered by right-clicking the duplicate. A simple cosmetic problem you might think, but if I had attempted to delete the phantom copy, I would have trashed an entirely different book!

Previously: I Wish Apple Loved Books.

Swift Ownership Manifesto

Apple (mailing list):

The widespread use of copy-on-write value types in Swift has generally been a success. It does, however, come with some drawbacks:

  • Reference counting and uniqueness testing do impose some overhead.

  • Reference counting provides deterministic performance in most cases, but that performance can still be complex to analyze and predict.

  • The ability to copy a value at any time and thus “escape” it forces the underlying buffers to generally be heap-allocated. Stack allocation is far more efficient but does require some ability to prevent, or at least recognize, attempts to escape the value.

Certain kinds of low-level programming require stricter performance guarantees. Often these guarantees are less about absolute performance than predictable performance. For example, keeping up with an audio stream is not a taxing job for a modern processor, even with significant per-sample overheads, but any sort of unexpected hiccup is immediately noticeable by users.

Another common programming task is to optimize existing code when something about it falls short of a performance target. Often this means finding ”hot spots” in execution time or memory use and trying to fix them in some way. When those hot spots are due to implicit copies, Swift’s current tools for fixing the problem are relatively poor; for example, a programmer can fall back on using unsafe pointers, but this loses a lot of the safety benefits and expressivity advantages of the library collection types.

We believe that these problems can be addressed with an opt-in set of features that we collectively call ownership.

Previously: Chris Lattner ATP Interview, Swift Plans.

Dart-C USB-C Laptop Charger

David Pogue:

The Dart-C, billed as the world’s smallest laptop charger. And it really is tiny.

Yet somehow, it provides 65 watts—plenty for laptops like the 12- and 13-inch MacBooks, the Lenovo ThinkPad 13, ASUS ZenBook 3, Dell XPS 13, and so on. Really honking laptops, like the 15-inch MacBook Pro, expect more wattage (85). This charger will work on those machines—just not as fast.

How do I love this thing? Let us count the ways.

  • It has a standard USB jack embedded in the cable. That means that you can simultaneously charge your phone, tablet, camera, or whatever—with no slowdown in charging your primary gadget.
  • It has an indicator light that lets you know if you’re plugged into a working outlet. (Apple’s chargers no longer have a status light.)

It’s even more expensive than Apple’s charger, though.

Fixing (and Explaining) PDFpen 8.3.1’s Crash on Launch

Adam C. Engst:

Greg said that the reason PDFpen crashed — even before it actually launched — was because Smile’s developer signing certificate from Apple had expired.


In the past, the expiration of a code signing certificate had no effect on already shipped software. PDFpen 6.3.2, which Smile still makes available for customers using OS X 10.7 Lion, 10.8 Mountain Lion, and 10.9 Mavericks, is signed with a certificate that expired long ago, and it has no trouble launching.

What’s new with PDFpen 8 is that, in addition to being code signed, it has a provisioning profile, which is essentially a permission slip from Apple that’s checked against an online database in order to allow the app to perform certain actions, called entitlements. For PDFpen, the entitlement that’s being granted is the capability to access iCloud despite being sold directly, rather than through the Mac App Store, a feature that wasn’t possible until about a year ago.

It sounds like every Developer ID app that uses iCloud has a built-in time bomb. Something is not designed properly here. First, why does Apple issue certificates with relatively short expiration dates? They already have a means of revoking certificates in the event of a problem. Second, why does the OS check whether the code signing certificate is valid now, as opposed to when the provisioning profile was signed?

Previously: CloudKit and Map Kit for Gatekeeper Apps, More Mac App Store Certificate Problems, WWDR Intermediate Certificate Expiration.

Update (2017-02-20): See also: MacRumors, Acqualia.

Update (2017-02-22): Rick Fillion:

Due to the expired Provisioning Profile, 1Password mini wouldn’t launch. And without mini running, 1Password itself was unable to startup successfully. Both mini and 1Password itself were signed with the same Developer ID certificate. Gatekeeper allowed 1Password to run, but due to the different rules for apps with provisioning profiles, it would not allow mini to run.

As far as we can tell, the only way to correct this problem is to provide a new build of the app with an updated provisioning profile with a new expiration date.


When we generated our updated provisioning profile we also needed to generate a new Developer ID certificate. We didn’t realize it at the time, but the common name of newly created certificates now includes the team identifier in addition to the company name; “Developer ID Application: AgileBits Inc. (2BUA8C4S2C)” vs. “Developer ID Application: AgileBits Inc.”. Close. Super close. But we weren’t looking for a “close” match.

Rick Fillion:

In case you’re wondering how to tell when a provisioning profile will expire you can run security cms -D -i on the Terminal to have it output information about a profile.

Update (2017-02-24): Rob Griffiths:

Follow me now, if you wish, for a somewhat deep dive into the world of code signing, as I attempt to explain—from a consumer’s perspective yet with a developer’s hat on—what is code signing, why these apps broke, why the breakage wasn’t expected, and other related questions and answers.


Apple explicitly tells developers—in at least two places—that they only need these certificates to build new apps or update existing apps. So if you have a certificate that’s set to expire, but you don’t have an urgent need to update an app or create a new app, it’s supposed to be a non-event.


So the app can’t tell the user what’s going on, and even worse, the OS doesn’t tell the user what’s going on: The app just seemingly dies in an instant. That’s bad for the user, and bad for the developer, because they’re blamed for something they didn’t even know was occurring (because their app never loads, and no crash reports are generated). If the OS is going to kill the app, the OS should tell the user why it did so, so the user has some understanding about the problem.

Update (2017-03-09): Greg Scown:

Looks like Apple has addressed the Provisioning Profile issue.

Thursday, February 16, 2017

Deferring ABI Stability From Swift 4

Ted Kremenek (tweet, Reddit):

Given the importance of getting the core ABI and the related fundamentals correct, we are going to defer the declaration of ABI stability out of Swift 4 while still focusing the majority of effort to get to the point where the ABI can be declared stable.

To allow the community to follow along with this effort, an ABI dashboard will get wired up from the swift-evolution home page that will present a table of main ABI tasks remaining and what Swift release they landed in.


The Swift 4 compiler will provide a source-compatibility mode to allow existing Swift 3 sources to compile, but source-breaking changes can manifest in “Swift 4” mode. That said, changes to fundamental parts of Swift’s syntax or standard library APIs that breaks source code are better front-loaded into Swift 4 than delayed until later releases. Relative to Swift 3, the bar for such changes is significantly higher[…]

Swift 4 Release Process:

Swift 4 is a major release that is intended to be completed in the fall of 2017. It pivots around providing source stability for Swift 3 code while implementing essential feature work needed to achieve binary stability in the language. It will contain significant enhancements to the core language and Standard Library, especially in the generics system and a revamp of the String type.


The intended design is that a project containing multiple Swift modules, such as an Xcode project with multiple Swift targets, will be able to adopt into the specific Swift language mode on a per module (target) level and that they can freely interact within the same compiled binary. Note that this interoperability only exists at the binary level when the targets are compiled with the same compiler.

Postponing ABI stability (again) makes sense, especially given that String is still in flux.

Previously: Chris Lattner ATP Interview, Swift 4 String Manifesto, ABI Stability Deferred Until After Swift 3.0, Looking Back on Swift 3 and Ahead to Swift 4.

Being a Mutable Collection is not Sufficient to be a MutableCollection

Ole Begemann:

A MutableCollection supports in-place element mutation. The single new API requirement it adds to Collection is that the subscript now must also have a setter.


MutableCollection allows changing the values of a collection’s elements, but the protocolʼs documentation stipulates that the mutation must neither change the length of the collection nor the order of the elements. Set canʼt satisfy either of these requirements.


All Dictionary would gain from conforming to MutableCollection and/or RangeReplaceableCollection would be methods that operate on Index values and (Key, Value) pairs, which is probably not compelling enough to invest anything in the conformance even if it were compatible with the typeʼs implementation.
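To make the subscript-setter requirement concrete, here is a minimal illustration using Array (my example, not Begemann’s):

```swift
// Array is a MutableCollection: the subscript has a setter, so
// assignment replaces an element in place without changing the
// collection's length or the order of the other elements.
var numbers = [3, 1, 2]
numbers[0] = 10
assert(numbers == [10, 1, 2])
assert(numbers.count == 3)   // length unchanged, as the protocol requires

// The conformance is what unlocks generic in-place algorithms
// such as sort(), which permutes elements through that subscript.
numbers.sort()
assert(numbers == [1, 2, 10])
```

Set has no such setter to offer: overwriting “the element at an index” could change both the length and the positions of other elements, which is exactly what the protocol forbids.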

Ole Begemann:

CharacterView does conform to RangeReplaceableCollection but not to MutableCollection. Why? A string is clearly mutable; it seems logical that it should adopt this protocol. Again, we need to consider the protocolʼs semantics.


However, the Characterʼs size in the underlying storage is not the same for all characters, so replacing a single Character can potentially make it necessary to move the subsequent text forward or backward in memory by a few bytes to make room for the replacement. This would make the simple subscript assignment potentially an O(n) operation, and subscripting is supposed to be O(1).


The final potential issue for CharacterViewʼs hypothetical MutableCollection conformance is Unicode and the complexities it brings. The existence of combining characters means that replacing a single Character can actually change the stringʼs length (measured in Characters) if the new character combines with its preceding character.
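The combining-character point is easy to demonstrate. A small sketch, written against current Swift (where String itself is the Character collection that CharacterView used to be):

```swift
var s = "cafe"
assert(s.count == 4)

// Appending U+0301 COMBINING ACUTE ACCENT does not add a Character:
// it combines with the trailing "e" into "é", a single grapheme cluster.
s += "\u{0301}"
assert(s.count == 4)
assert(s.hasSuffix("é"))
```

So an operation that looks like it appends or replaces one Character can leave the Character count unchanged, or change it, depending on what it combines with; that is incompatible with MutableCollection’s length-preserving semantics.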

Previously: Swift 4 String Manifesto.

Apple Fighting New “Right to Repair” Legislation

Ben Lovejoy (Hacker News):

Apple is fighting ‘right to repair’ legislation which would give consumers and third-party repair shops the legal right to purchase spare parts and access service manuals. The state of Nebraska is holding a hearing on the proposed legislation next month, and Motherboard reports that Apple will be formally opposing the bill.

[…] hopes that getting a single state to pass a right to repair bill will result in manufacturers giving in, citing the precedent of similar legislation in the car industry.

Previously: Error 53.

Great Alternatives to Hamburger Menus

Kara Pernice and Raluca Budiu:

Hidden navigation, such as the hamburger menu, is one of the many patterns inspired by mobile designs. Screen space is a precious commodity on mobile. To meet the challenge of prioritizing content while still making navigation (and other chrome) accessible, designers commonly rely on hiding the navigation under a menu — often indicated by the infamous hamburger icon. Like a cheap fast food chain, it got designers addicted to its convenience, and now serves millions each day, both on mobile devices and on desktops.

While our qualitative user testing has repeatedly shown that navigation hidden under a drop-down menu is less discoverable on the desktop, we wanted to measure the size of this effect in a quantitative study and assess the relative impact of hidden navigation on the desktop versus mobile.

Mobiscroll (via Andy Bargh):

Hamburger menus drive engagement down, slow down exploration and confuse people. If you are reading this, it won’t confuse you, but it damn will confuse others who might be happy to consume your content.


I cannot stress this enough. Always design with real content, otherwise you’ll end up with placeholders, lorem ipsums and hamburger menus inside hamburger menus. Content on its own doesn’t make sense, and layouts without content either.

Previously: Apple on Hamburger Menus, The Hamburger Menu Doesn’t Work, Ex-Microsoft Designer Explains the Move Away From Metro, Hamburgers and Basements.

Kindle Direct Paperbacks

Amazon (via Matt Henderson):

We’re excited to offer the opportunity to publish paperbacks in addition to Kindle eBooks. We’ll be adding even more print-related features in the future, like proof copies, author (wholesale) copies, and expanded distribution to bookstores and non-Amazon websites. CreateSpace still offers these features, and KDP will offer them as well.

Publishing a paperback can help you reach new readers. KDP prints your book on demand and subtracts your printing costs from your royalties, so you don’t have to pay any costs upfront or carry any inventory.

What Happened With the Apple TV 4

Mark Gurman (tweet):

Apple doesn’t disclose how many Apple TVs it sells, but Chief Financial Officer Luca Maestri acknowledged in a recent interview that sales decreased year-over-year from the 2015 holiday season to this past 2016 holiday period. The research firm eMarketer says the fourth-generation Apple TV has steadily lost market share since its release in the fall of 2015; in January just 11.9 percent of connected television customers were using it, the research firm says, down from 12.5 percent in September. In part, the slide reflects competition from Amazon and Roku, whose boxes do the same and more for less money.


Apple had a backup plan if it wasn’t able to replace the existing cable box—the much-ballyhooed “skinny bundle,” a stripped down web service that would let viewers choose channels rather than paying for ones they don’t watch. Apple proposed bundling the four main broadcast networks and a handful of cable channels as well as on-demand TV shows and movies for $30 to $40 a month. The media companies were willing to engage with Apple due to concerns about the rise of online services like Netflix and the cord-cutting phenomenon.

But the two sides stumbled over cost, the composition of the bundles and negotiating tactics. The media companies blamed Apple’s arrogance; Apple blamed the media companies’ inflexibility. In the end, the talks fell apart, leaving Apple to tout stripped-down bundles from Sony PlayStation and DirecTV. After the negotiations foundered, Apple’s hardware team ditched the coaxial port.

Update (2017-02-18): Dan Moren:

It’s tough to figure out what that extra cash buys you, beyond the Apple brand. The previous version of the Apple TV, which was less powerful though still perfectly capable at its primary function of streaming video, cost a more comparable $99. Frankly, though I use the Apple TV every day, I’d be hard-pressed to find $50 more worth of functionality on the new model. (Most of that cost increase is probably attributable to the Siri remote, which I have mostly ditched in favor of a harder-to-lose and more user-friendly Logitech Harmony universal remote.)


We talk a lot about the “second screen” experience of people sitting in front of the TV: how they’re checking Twitter or looking actors up on IMDb or Wikipedia while watching. But when it comes to apps, it’s not the iPhone or iPad or MacBook that’s the second screen—it’s the Apple TV.

Wednesday, February 15, 2017

Grand Central Dispatch’s Achilles Heel

Wil Shipley (tweet):

I don’t know much about the internals of GCD so I can’t speak with authority, but it seems like this could be solved with a couple of minor changes to sync(): figure out if the destination queue is the current queue, and if so just execute the submitted block immediately and return. This wouldn’t even be a source or binary-breaking change, because, again, the current behavior is HANG the app.

And, in fact, this is the workaround third-party programmers have made for the last several years. If you do a Google search for dispatch_get_current_queue [now deprecated] you’ll see a bunch of developers complaining about that call disappearing because they were using it for this hack.
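The supported replacement for that deprecated call is queue-specific data. Here is a hedged sketch of the workaround pattern (names are mine, and, as Greg Parker points out below, this check is not sufficient for nested target queues):

```swift
import Dispatch

// Tag a queue with a specific key, and only call sync when we're
// not already running on that queue.
let queueKey = DispatchSpecificKey<Void>()
let workQueue = DispatchQueue(label: "com.example.work")
workQueue.setSpecific(key: queueKey, value: ())

func syncSafe(_ block: () -> Void) {
    if DispatchQueue.getSpecific(key: queueKey) != nil {
        block()                        // already on workQueue: run inline
    } else {
        workQueue.sync(execute: block) // safe to block and wait
    }
}

var result = 0
workQueue.sync {
    // Without the check, a plain workQueue.sync here would deadlock.
    syncSafe { result = 42 }
}
assert(result == 42)
```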


Hopefully you’re as horrified by this mess as I am. This is the very model of spaghetti code. Last week I ported this file from macOS 10.8 to 10.12 and honestly I still couldn’t come up with a good way to re-architect it. I’m bending over backwards to interact with the main thread in multiple places in this codebase and I’m not sure if I’m on the main thread or not and it’s a nightmare.

Greg Parker:

dispatch_get_current_queue() == someQueue is insufficient to avoid the deadlocks you describe.

Rob Napier:

Interestingly, they did fix this in CoreData on top of GCD. performAndWait is reentrant. But unsure how they implemented.

Reverse Engineering

Alex Denisov:

The task becomes easy since the six colors are hardcoded in the binary. I just need to find where exactly and change the values to ones I like more.


What is important here: three consecutive values starting at 0x10000c790 moved to xmm_ registers. I’m not 100% sure, but I’d assume that they are used to pass parameters into colorWithDeviceRed:green:blue:alpha:.


Now I know where the colors reside in code. I need to find them in the binary. The address of a first color component is 0x10000c790. To find its on-disk address I need to subtract a base address from it. The base address can be obtained via LLDB as well.


Now I can use xxd with -s (--seek) and -l (--length) parameters to get exactly 8 bytes at a given address.

Switch 2FA From SMS to an App

Laura Shin (via David Heinemeier Hansson):

“So I called the company to make sure I hadn’t forgotten to pay my phone bill, and they said, you don’t have a phone with us. You transferred your phone away to another company,” he says. A hacker had faked his identity and transferred his phone number from T-Mobile to a carrier called Bandwidth that was linked to a Google Voice account in the hacker’s possession. Once all the calls and messages to Kenna’s number were being routed to them, the hacker(s) then reset the passwords for Kenna’s email addresses by having the SMS codes sent to them (or, technically, to Kenna’s number, newly in their possession). Within seven minutes of being locked out of his first account, Kenna was shut out of up to 30 others, including two banks, PayPal, two bitcoin services — and, crucially, his Windows account, which was the key to his PC.


Last summer, the National Institutes of Standards and Technology, which sets security standards for the federal government, “deprecated” or indicated it would likely remove support for 2FA via SMS for security. While the security level for the private sector is different from that of the government, Paul Grassi, NIST senior standards and technology advisor, says SMS “never really proved possession of a phone because you can forward your text messages or get them on email or on your Verizon website with just a password. It really wasn’t proving that second factor.”

Usher Will Be Stepping Aside

Many Tricks:

QuickTime is very old, and obviously no longer updated. (It’s so old that it’s not even 64-bit code.) Newer video formats may cause issues, and we can’t resolve those issues in Usher because they’re actually in QuickTime. Given these age-related issues with QuickTime, we’re no longer comfortable selling and supporting Usher to new buyers, so we’ve decided it’s retirement time.


Beyond the market size, we can’t just delete “old QuickTime” and insert “QuickTime X” and be done with it. The two are very different, so much so that we’d need to totally rewrite the engine that drives Usher. And that’s a huge job…and one that wouldn’t ever be paid back in sales, due to the limited market size.

Previously: The Curious Case of QuickTime X, What Is Apple Doing With QuickTime?, AV Foundation and the Void.

Swift and Objective-C Forever?

Jeff Johnson (Hacker News):

When Swift became public in 2014, its creator Chris Lattner seemed to claim that Swift and Objective-C would coexist indefinitely.


The problem is that nobody believes this. And of course Lattner has now left Apple, so he won’t be there to take the criticism if his claim turns out to be false. The consensus among developers is that Apple will eventually deprecate Objective-C, and Swift will become the sole first-class language for Cocoa app development.


You only have to review the history of Apple developer relations to see the long string of deprecations, disappointments, suffering, and broken promises. Objective-C garbage collection, 64-bit Carbon, the Cocoa-Java bridge, Yellow Box for Windows, Dylan. Need I go on? I could go on. Apple evangelists will tell you that Swift is the best programming language ever and then turn around and tell you that we’ve always been at war with Swift.


The people who think Apple will deprecate Objective-C, how do they think Apple can handle it? Some people suggest that Apple will deprecate Objective-C externally, but they will continue Objective-C development internally and indefinitely. However, I think these people underestimate the problem. Given the amount of Objective-C code Apple has, and the constraints they’re working under, taking the slow road internally to a Swift future would be a very slow road indeed.

I don’t see Apple dropping Swift. The technology seems to be sound, and Apple has really put its reputation on the line in a way that it didn’t with the other canceled projects. Plus, unlike those, Swift already has massive adoption outside of Apple.

That said, I expect that Objective-C, while not being deprecated, will decline in popularity the same way that Carbon—also officially a first-class coexister—did. I don’t think that Apple will maintain sample code parity for much longer.

Johnson is right that there are many open questions about how Apple will manage this transition—if that’s what it is—both internally and externally. It will affect the OS itself, Apple’s apps, the public APIs, and Apple’s own staffing. It’s got to be difficult for Apple to hire and retain WebObjects programmers for its internal services, and it could face similar issues, to a lesser extent, if developers continue to switch to Swift.

Tuesday, February 14, 2017

On the Uselessness of Search in macOS Mail

Rob Griffiths:

For the last couple macOS releases, I’ve had nothing but trouble searching in Mail. Note that I didn’t write “trouble searching mail,” but rather, “trouble searching in Mail.”

Searching Spotlight directly works for him.

Dan Frakes:

Is there a secret to getting macOS Mail’s “Unread” smart mailbox to show 0 messages when there are no unread messages?

I see both of these issues all the time, though with the Flagged smart mailbox rather than Unread. The number next to the Flagged mailbox is different from the number of messages shown in that mailbox, and some of the messages shown in it are not actually flagged.

As a quick fix to prevent it from showing messages that are not actually flagged, I was able to use this Terminal command to list the affected messages:

sqlite3 ~/Library/Mail/V4/MailData/Envelope\ Index 'SELECT * FROM subjects WHERE rowid IN (SELECT subject FROM messages WHERE flagged="1")'

After confirming that these really are messages that should not be flagged, I could mark the messages as unflagged:

sqlite3 ~/Library/Mail/V4/MailData/Envelope\ Index 'UPDATE messages SET flagged="0" WHERE flagged="1"'

(Caveat: I do not use multi-colored flags and have not investigated how this affects them.)

This would only provide temporary relief, though. Rebuilding Mail’s database takes longer but also lasts longer. But because of this and other problems that wouldn’t fully go away, I did a clean reinstall of macOS about a week ago. So far so good, but I suspect that both problems will be back.

Fortunately, most of the mail that I need to search is in FogBugz and EagleFiler.

Update (2017-02-15): Rob Griffiths:

If I move all the messages from an inbox or local storage folder into a different local storage folder, they’ll be indexed and findable. I can then move them back into the inbox or source folder, and they remain findable.

Twitterrific for Mac Kickstarter

Sean Heber (MacRumors):

After much consideration, we decided that the best way forward was to go back to the beginning. Rather than bending the long-neglected Twitterrific for Mac into a new shape, we will borrow what we can from iOS and use it to build a modern new macOS app.


We’re confident that we can do this, but we need your help! Please check out our Kickstarter page, watch the video and study the plan. There are many different funding levels including regular access to beta builds all through Phoenix’s lifespan. If you’re the kind of person who loves to see software evolve through its development, or just want to start using a new Twitterrific on your desktop sooner rather than later, this one is for you.

This is our first Kickstarter project and a new way for us to fund our software development. The main reason the Mac app languished is because we aren’t sure that there’s a market for a desktop social networking product (it’s easy to make a case that all our social activities have moved to mobile.) For our small software company, the risk of recouping development costs was just too high. Kickstarter removes this unpredictability and gives us an exact budget to work against.

Jason Snell:

Ten years later, my view of Twitter as a service is still largely framed by apps, rather than the web. If Twitter was only on the Web, I think I’d use it about as often as Facebook, which is to say, not often.

Instapaper Outage Cause & Recovery

Brian Donohue:

Without knowledge of the pre-April 2014 file size limit, it was difficult to foresee and prevent this issue. As far as we can tell, there’s no information in the RDS console in the form of monitoring, alerts or logging that would have let us know we were approaching the 2TB file size limit, or that we were subject to it in the first place. Even now, there’s nothing to indicate that our hosted database has a critical issue.

If we ever had knowledge of the file size limit, it likely left with the 2013-era betaworks contractors that performed the Softlayer migration.


We didn’t have a good disaster recovery plan in the event our MySQL instance failed with a critical filesystem issue that all of our backups were also subject to.


When it became clear the dump would take far too long (first effort took 24 hours, second effort with parallelization took 10 hours), we began executing on a contingency plan to get an instance in a working state with limited access to Instapaper’s archives. This short-term solution launched into production after 31 hours of downtime. The total time to create that instance and launch it into production was roughly six hours.


Our only recourse was to restore the data to an entirely new instance on a new filesystem. This was further complicated by the fact that our only interface into the hosted instances is MySQL, which made filesystem-level solutions like rsync impossible without the direct assistance from Amazon engineers.

Planet of the Apps

Husain Sumra:

Apple wasn’t the first choice for the show, according to Silverman. The project was initially shopped around to the big networks. The show drew major interest, but Will.I.Am brought up the show to Jimmy Iovine while at a meeting with Apple in Los Angeles. Apple was interested, and Silverman and the rest of the producers slowed down the process with the networks to give Apple a chance at securing it for Apple Music.


Cue later emphasized that Apple doesn’t just want to buy shows, denying that Apple was ever interested in purchasing The Grand Tour. Instead, Apple only wants to make shows that are unique and “create culture.”

The trailer is here. Most of the reactions I’ve seen have been negative:

Update (2017-02-15): See also: Nick Heer, Benjamin Mayo, Joe Rosensteel.

Update (2017-02-19): See also: Cabel Sasser.

Update (2017-06-28): See also: Everything wrong with Apple’s ‘Planet of the Apps’ (via Michael B. Johnson).

Monday, February 13, 2017

Swift 3 Keywords Reference

Jordan Morgan (via Ole Begemann):

So today — we’ll look at every single keyword Swift (v 3.0.1) has to offer us along with some code for each one, all in the name of booking up on our trade’s tools.

Some are obvious, some are obscure and some are sorta(ish) recognizable but they all make for great reading and learning.

Software Engineering at Google

Fergus Henderson (PDF, via Hacker News):

Write access to the repository is controlled: only the listed owners of each subtree of the repository can approve changes to that subtree. But generally any engineer can access any piece of code, can check it out and build it, can make local modifications, can test them, and can send changes for review by the code owners, and if an owner approves, can check in (commit) those changes. Culturally, engineers are encouraged to fix anything that they see is broken and know how to fix, regardless of project boundaries.


Most larger teams also have a “build cop” who is responsible for ensuring that the tests continue to pass at head, by working with the authors of the offending changes to quickly fix any problems or to roll back the offending change. (The build cop role is typically rotated among the team or among its more experienced members.) This focus on keeping the build green makes development at head practical, even for very large teams.


All code used in production is expected to have unit tests, and the code review tool will highlight if source files are added without corresponding tests. Code reviewers usually require that any change which adds new functionality should also add new tests to cover the new functionality.


Most software at Google gets rewritten every few years.

Optimizations in Syntax Highlighting

Alexandru Dima (via Hacker News):

Tokenization in VS Code (and in the Monaco Editor) runs line-by-line, from top to bottom, in a single pass. A tokenizer can store some state at the end of a tokenized line, which will be passed back when tokenizing the next line. This is a technique used by many tokenization engines, including TextMate grammars, that allows an editor to retokenize only a small subset of the lines when the user makes edits.
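The carry-the-state-forward scheme can be sketched with a toy tokenizer whose only state is whether we are inside a block comment. This is my own illustrative Swift sketch of the technique, not VS Code’s actual code:

```swift
// Each line is tokenized starting from the state left over at the end
// of the previous line, and the tokenizer returns the state at the end
// of this line. An edit to one line only forces retokenization of the
// lines below it until the returned states match the cached ones.
enum State { case normal, inComment }

func tokenizeLine(_ line: String, startingIn state: State) -> State {
    var s = state
    var i = line.startIndex
    while i < line.endIndex {
        let rest = line[i...]
        switch s {
        case .normal where rest.hasPrefix("/*"):
            s = .inComment
            i = line.index(i, offsetBy: 2)
        case .inComment where rest.hasPrefix("*/"):
            s = .normal
            i = line.index(i, offsetBy: 2)
        default:
            i = line.index(after: i)
        }
    }
    return s
}

let lines = ["let a = 1 /* start", "still a comment", "end */ let b = 2"]
var state = State.normal
var states: [State] = []
for line in lines {
    state = tokenizeLine(line, startingIn: state)
    states.append(state)
}
assert(states == [.inComment, .inComment, .normal])
```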


Holding on to that tokens array takes 648 bytes in Chrome and so storing such an object is quite costly in terms of memory (each object instance must reserve space for pointing to its prototype, to its properties list, etc). Our current machines do have a lot of RAM, but storing 648 bytes for a 15 characters line is unacceptable.


Perhaps the biggest breakthrough we've had is that we don't need to store tokens, nor their scopes, since tokens only produce effects in terms of a theme matching them or in terms of bracket matching skipping strings.


When pushing a new scope onto the scope stack, we will look up the new scope in the theme trie. We can then compute immediately the fully resolved desired foreground or font style for a scope list, based on what we inherit from the scope stack and on what the theme trie returns.
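The trie lookup Dima describes can be sketched in a few lines. This is an illustrative simplification in Python, not VS Code’s actual TypeScript implementation; the class and rule names here are invented:

```python
# Hypothetical sketch of a "theme trie": theme rules such as
# "string" -> one color, "string.regexp" -> another, are stored in a
# trie keyed on the dot-separated scope segments, so resolving the
# style for a newly pushed scope is a single walk down the trie.

class ThemeTrieNode:
    def __init__(self, style=None):
        self.style = style          # e.g. {"foreground": "#ce9178"}
        self.children = {}

    def insert(self, scope, style):
        node = self
        for segment in scope.split("."):
            node = node.children.setdefault(segment, ThemeTrieNode())
        node.style = style

    def match(self, scope):
        """Return the style of the deepest rule that prefixes `scope`."""
        node, best = self, self.style
        for segment in scope.split("."):
            if segment not in node.children:
                break
            node = node.children[segment]
            if node.style is not None:
                best = node.style
        return best

trie = ThemeTrieNode(style={"foreground": "#d4d4d4"})   # editor default
trie.insert("string", {"foreground": "#ce9178"})
trie.insert("string.regexp", {"foreground": "#d16969"})

# When the tokenizer pushes "string.quoted.double", the resolved
# foreground can be computed immediately: no rule matches "quoted",
# so the lookup inherits the "string" rule.
resolved = trie.match("string.quoted.double")
assert resolved == {"foreground": "#ce9178"}
```

Because the fully resolved style is computed at push time, nothing about the token needs to be retained afterward, which is exactly what makes discarding the scopes possible.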

How to Stop Seeing Your Amazon Searches Everywhere

Rob Pegoraro (via Kirk McElhearn):

You can avoid this problem by doing your online shopping in a private-browsing or incognito-mode window. But it’s easy to forget to do that when you have 10 different pages open in tabs in your browser and you’re also switching between the Web, e-mail and other apps.

Instead, you can tell Amazon to stop sending you ads based on your shopping habits. To do that, visit or log into your Amazon account in a browser, click on your username in the top right corner of the page, and then click on the “Your advertising preferences” link.

You will, however, have to repeat this on each browser that you use for any Amazon shopping.

Sunday, February 12, 2017

Virtual Apple II

Virtual Apple ][ (via Hacker News):

Almost every Apple ][ and Apple IIgs game ever made, ready to play in your browser.

I was hoping to see Life & Death. They do have Thexder and Oregon Trail, though, along with Risk, which had a better AI than the Mac version.

Previously: Apple ][js.

Update (2017-02-12): They even have some scanned user manuals.

Testing Out Snapshots in APFS

Adam H. Leventhal:

It’s 2017, and Apple already appears to be making good on its promise with the revelation that the forthcoming iOS 10.3 will use APFS. The number of APFS tinkerers using it for their personal data has instantly gone from a few hundred to a few million. Beta users of iOS 10.3 have already made the switch apparently without incident. They have even ascribed unscientifically-significant performance improvements to APFS.


We figured out the proper use of the fs_snapshot system call and reconstructed the WWDC snapUtil. But all this time an equivalent utility has been lurking on macOS Sierra. If you look in /System/Library/Filesystems/apfs.fs/Contents/Resources/, Apple has included a number of APFS-related utilities, including apfs_snapshot (and, tantalizingly, a tool called hfs_convert).


After the volume is mounted again, not only are the contents reverted (to an empty directory in this case), but any snapshots taken after the snapshot used for the revert operation are deleted as well. One might expect APFS snapshot revert to immediately take effect and restore the contents of the volume to the previous state. Some technical issues likely make that challenging, such as what to do about programs that have files within that volume open. So seeing if and how Apple decides to expose this functionality will be interesting.

mkfile(8) Is Severely Syscall Limited

Marcel Weiher (Hacker News):

It never occurred to me that the problem could be with mkfile(8). Of course, that’s exactly where the problem was. If you check the mkfile source code, you will see that it writes to disk in 512 byte chunks. That doesn’t actually affect the I/O path, which will coalesce those writes. However, you are spending one syscall per 512 bytes, and that turns out to be the limiting factor. Upping the buffer size increases throughput until we hit 2GB/s at a 512KB buffer size. After that throughput stays flat.


The point is that the hardware has changed so dramatically that even seemingly extremely safe and uncontroversial assumptions no longer hold. Heck, 250MB/s would be perfectly fine if we still had spinning rust, but SSDs in general and particularly the scorchingly fast ones Apple has put in these laptops just changed the equation so that something that used to just not factor into the equation at all, such as syscall performance, can now be the deciding factor.
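The effect Weiher measured is easy to reproduce in miniature: the same number of bytes costs roughly a thousand times more write(2) syscalls at a 512-byte chunk size than at 512 KB. A rough Python illustration (sizes shrunk so it runs quickly; the gap is far more dramatic at multi-gigabyte sizes on a fast SSD):

```python
# Illustration of mkfile's bottleneck: throughput vs. write() chunk
# size. Each os.write() is one syscall, so tiny buffers spend most of
# their time crossing the kernel boundary rather than moving data.
import os, tempfile, time

def write_file(path, total, bufsize):
    buf = b"\0" * bufsize
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        written = 0
        while written < total:
            written += os.write(fd, buf)   # one syscall per chunk
    finally:
        os.close(fd)
    return written

total = 8 * 1024 * 1024   # 8 MB is enough to show the trend
with tempfile.TemporaryDirectory() as d:
    for bufsize in (512, 512 * 1024):      # mkfile's size vs. a sane one
        path = os.path.join(d, f"test-{bufsize}")
        t0 = time.perf_counter()
        write_file(path, total, bufsize)
        dt = time.perf_counter() - t0
        print(f"bufsize {bufsize:>7}: {total / dt / 1e6:.0f} MB/s")
```

The exact numbers depend on the machine and filesystem cache, but the 512-byte case should be visibly slower for the same amount of data.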

The Slow Decline of iPad Sales

John Gruber:

But put software development aside. I think the bigger problem for the iPad is that there are few productivity tasks, period, where iPad is hardware-constrained. Aldus PageMaker shipped for the Mac in 1985. By 1987 or 1988, it was easy to argue that the Mac was, hands-down, the best platform the world had ever seen for graphic designers and visual artists. By 1991 — seven years after the original Mac — I think it was inarguable. And the improvements in Mac software during those years drove demand for improved hardware. Photoshop, Illustrator, Freehand (R.I.P.), QuarkXpress — those apps pushed the limits of Mac hardware in those days.

Michael Rockwell:

iPad owners don’t buy new iPads because the one they have is just as fast as the day they bought it. By comparison, the Windows PCs that many of these users buy are at their fastest when they’re first set up. I reference Windows users because they represent the vast majority of mainstream computer users and I believe them to be the primary reason for the massive success of the iPad in its early days.


In the tech-centric circles that many of us frequent, new hardware and software features matter, a lot. But I don’t think the mainstream user is convinced to spend hundreds of dollars on a new device just because it connects to a new kind of wireless keyboard or works with a $100 drawing accessory that you have to buy separately.


The iPad upgrade cycle might be longer than any other computing device in history. This might look terrible for Apple’s financial department, but it’s a testament to how well-crafted these devices are from both a software and hardware standpoint. The lengthy upgrade cycle lends itself to high customer satisfaction ratings and repeat customers. That’s something Apple should be proud of — a computing device that doesn’t have to be replaced every few years.

Nick Heer:

Apple has long said that the iPad’s big display provides the opportunity to create a completely different app experience. At the first Retina iPad event, Tim Cook even spent stage time mocking Android tablet apps that looked like large phone apps.

But now, five years after that event, it’s not so much the apps that are scaled-up versions of a smartphone, but rather that the operating system seems largely driven by what the iPhone can do. This was an early criticism of the iPad, but I felt it was unwarranted at the time — a larger version of a familiar interface is a great way to introduce a new product category.

Chris Adamson:

Here’s a counter-argument that is being overlooked: the iPad represents effectively all of the “productivity tablet” market[…].


Now even if the Mac sells less than the iPad, the PC market as a whole is massive… much larger than tablets, and larger still than my contrived “productivity tablet” market. And Mac’s not even 10% of this giant PC market.

So, in terms of growth opportunities, which is more realistic: finding non-tablet-users to adopt the iPad for their productivity or work needs (and making the iPad more suitable for that), or flipping more of the 90% of people already using PCs to a better version of the same thing?

Previously: Apple’s Q1 2017 Results.

Update (2017-02-12): Jeff Johnson:

iPad upgrade cycle shouldn’t be the focus. Ask why new sales aren’t growing. How did iPad reach market saturation in only 4 years?

Ole Begemann:

Not sure I buy the argument that iPad sales are slow because old devices are “fast enough”. My 3-year-old iPad Air is often painfully slow.

Friday, February 10, 2017

Protecting Your Data at a Border Crossing

Jonathan Zdziarski:

Obviously, you want all of your devices encrypted and powered off at the border. There are plenty of ways to access content on devices (even locked ones) if the encryption is already unlocked in memory.


To lock down 2FA at a border crossing, you’ll need to disable your own capabilities to access the resources you’ll be compelled to surrender. For example, if your 2FA sends you an SMS message when you log in, either discard or mail yourself the SIM for that number, and bring a prepaid SIM with you through the border crossing; one with a different number. If you are forced to provide your password, you can do so, however you can’t produce the 2FA token required in order to log in.


I’ve written about Pair Locking extensively in the past. It’s an MDM feature that Apple provides allowing you to provision a device in such a way that it cannot be synced with iTunes. It’s intended for large business enterprises, but because forensics software uses the same interfaces that iTunes does, this also effectively breaks every mainstream forensics acquisition tool on the market as well. While a border agent may gain access to your handset’s GUI, this will prevent them from dumping all of the data – including deleted content – from it. It’s easy to justify it too as a corporate policy you have to have installed.

Piezo’s Life Outside the Mac App Store

Paul Kafasis (tweet, Hacker News):

The Mac App Store previously made up about half of Piezo’s unit sales, so we might have expected to sell half as many copies after exiting the store. Instead, it seems that nearly all of those App Store sales shifted to direct sales. It appears that nearly everyone who would have purchased Piezo via the Mac App Store opted to purchase directly once that was the only option. Far from the Mac App Store helping drive sales to us, it appears we had instead been driving sales away from our own site, and into the Mac App Store.


In each of the four most recent quarters, Piezo brought in more revenue than it had in the corresponding quarter a year earlier. We earned more revenue when Piezo was available exclusively through our store than when we provided the App Store as another purchasing option.

This result might seem counterintuitive. Piezo’s price remained the same, and unit sales went down, so how could we have earned more revenue? The key to understanding this is remembering the cost of being in Apple’s App Stores — 30% off the top of every sale.

Previously: 100 Days Without the App Store, Piezo Exits the Mac App Store.

Update (2017-02-14): John Biggs (via Hacker News):

App Stores are storehouses. They are great if you’re giving something away – you can grab lots of eyeballs quickly with the right strategy – but they definitely take a cut of revenue and could encroach on overall sales. The problem is that we’re stuck. We’re stuck selling through the iOS and Android app stores and, if you sell books, Amazon is the only way to go. When we get locked into one way of sales, we don’t see or accept alternatives, and that hurts us.

In the end these three examples should not define a sales strategy. What they do show, however, is that for certain popular products there is little value in trusting any app store – be it Google’s, Apple’s, or Microsoft’s – to work in your favor. Direct sales are always an option, and it’s quite important to figure out a strategy based on direct sales sooner rather than later.

Nick Heer:

The Mac App Store could have been a golden opportunity for developers. In a hypothetical world, having Apple handle credit card processing, automatic updates, quality assurance, and curation, plus putting their marketing muscle behind the store — all of these factors could have made developers happy to give up 30% of their potential revenue. But the large number and aggressive types of limitations required for apps in the store combined with Apple’s rather lax quality controls has made the Mac App Store a combined flea market and glorified Software Update utility.

Thursday, February 9, 2017

Most of the Web Really Sucks If You Have a Slow Connection

Dan Luu (Hacker News):

Despite my connection being only a bit worse than it was in the 90s, the vast majority of the web wouldn’t load. Why shouldn’t the web work with dialup or a dialup-like connection? It would be one thing if I tried to watch youtube and read pinterest. It’s hard to serve videos and images without bandwidth. But my online interests are quite boring from a media standpoint. Pretty much everything I consume online is plain text, even if it happens to be styled with images and fancy javascript.


More recently, I was reminded of how poorly the web works for people on slow connections when I tried to read a joelonsoftware post while using a flaky mobile connection. The HTML loaded but either one of the five CSS requests or one of the thirteen javascript requests timed out, leaving me with a broken page. Instead of seeing the article, I saw three entire pages of sidebar, menu, and ads before getting to the title because the page required some kind of layout modification to display reasonably. Pages are often designed so that they’re hard or impossible to read if some dependency fails to load. On a slow connection, it’s quite common for at least one dependency to fail.


While it’s easy to blame page authors because there’s a lot of low-hanging fruit on the page side, there’s just as much low-hanging fruit on the browser side. Why does my browser open up 6 TCP connections to try to download six images at once when I’m on a slow satellite connection? That just guarantees that all six images will time out!


For another level of ironic, consider that while I think of a 50kB table as bloat, Google’s AMP currently has > 100kB of blocking javascript that has to load before the page loads! There’s no reason for me to use AMP pages because AMP is slower than my current setup of pure HTML with a few lines of embedded CSS and the occasional image, but, as a result, I’m penalized by Google (relative to AMP pages) for not “accelerating” (deccelerating) my page with AMP.

Previously: Web Bloat Score Calculator, The Problem With AMP.

Update (2017-02-10): Bill Murray:

Before you marry a person you should first make them use a computer with slow internet to see who they really are.

Update (2017-02-20): See also: Juho Snellman.

Getting to Swift 3 at Airbnb


We have dozens of modules and several 3rd-party libraries written in Swift, comprising thousands of files and hundreds of thousands of lines of code. As if the size of this Swift codebase weren’t enough of a challenge, the fact that Swift 2 and Swift 3 modules cannot import each other further complicated the migration process. Even correct Swift 3 code that imports Swift 2 libraries will not compile. This incompatibility made it difficult to parallelize code conversion.


While we were excited about Swift 3’s new language features, we also wanted to understand how the update would affect our end users and overall developer experience. We closely monitored Swift 3’s impact on release IPA size and debug build time, since these have been our two largest Swift pain points so far. Unfortunately, after experimenting with different optimization settings, Swift 3 still scored marginally worse on both metrics.


A number of things have changed, but most importantly the parameter in completionBlock has changed from an implicitly unwrapped optional to an optional. This can break its usage within the blocks.


Optional protocol methods are easy to accidentally miss during a Swift 3 conversion.

Action Log Test Double

Robert C. Martin:

So how do you test-drive an algorithm like this? At first it might seem simple. For each test, create a small array of doubles, predict the results, and then write the test that compares the predicted results with the output of the algorithm.

There are several problems with this approach. The first is data-overload. Even the smallest array is a 3X3 with 9 different values to check. The next smallest is 5X5 with 25 values. Then 9X9 with 81 values. Trying to write a comprehensive set of tests, even for the 3X3 case, would be tedious at best; and very difficult for someone else to understand.

The second problem is that test-driving from the raw results forces us to write the whole algorithm very early in the testing process. There’s no way to proceed incrementally.


This is a rather different slant on a spy test double. Rather than giving us booleans and flags to inspect about the calls of individual functions, this spy is going to load the actions string with the sequence of things that happened as the algorithm proceeded.
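The action-log idea is easy to sketch. This is a minimal Python illustration of the pattern Martin describes, not his code; the spy, the toy algorithm, and the token format are all invented here:

```python
# Sketch of an action-log test double: instead of per-call booleans,
# the spy appends a compact token for every call it receives, and one
# assertion then checks the entire sequence of operations at once.

class ActionLogSpy:
    def __init__(self):
        self.actions = ""

    def scan_cell(self, x, y):
        self.actions += f"S{x}{y} "

    def count_neighbor(self, x, y):
        self.actions += f"N{x}{y} "

def visit_neighbors(spy, x, y, size):
    """Toy algorithm under test: scan a cell, then visit each of its
    in-bounds neighbors on a size x size grid."""
    spy.scan_cell(x, y)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) == (0, 0):
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < size and 0 <= ny < size:
                spy.count_neighbor(nx, ny)

spy = ActionLogSpy()
visit_neighbors(spy, 0, 0, 3)
# One readable assertion captures both the edge-clipping behavior and
# the order of operations, incrementally, before the full algorithm exists:
assert spy.actions == "S00 N01 N10 N11 "
```

The appeal is that the expected string doubles as documentation of the algorithm’s traversal order, and failing tests show the whole divergent sequence rather than a single false flag.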

Vizio Tracking TV Viewing

Lesley Fair:

Consumers have bought more than 11 million internet-connected Vizio televisions since 2010. But according to a complaint filed by the FTC and the New Jersey Attorney General, consumers didn’t know that while they were watching their TVs, Vizio was watching them. The lawsuit challenges the company’s tracking practices and offers insights into how established consumer protection principles apply to smart technology.

Starting in 2014, Vizio made TVs that automatically tracked what consumers were watching and transmitted that data back to its servers. Vizio even retrofitted older models by installing its tracking software remotely. All of this, the FTC and AG allege, was done without clearly telling consumers or getting their consent.

John Gruber:

The lack of respect for consumer privacy in this case is just appalling.

Nick Heer:

The FTC got involved and today announced that they would be fining Vizio the paltry sum of $2.2 million.

Update (2017-03-06): Josh Centers:

For a simpler solution, just don’t connect your TV to the Internet. If you want to use Netflix and other apps, get an Apple TV, which has the best privacy policy in the business[…]

Lorenzo Franceschi-Bicchierai (via John Gruber):

A company that sells “smart” teddy bears leaked 800,000 user account credentials—and then hackers locked it and held it for ransom.

Wednesday, February 8, 2017

Ultra Accessory Connector

Jordan Kahn (MacRumors, iMore, ArsTechnica):

Apple is planning to adopt a new connector type for accessories for iPhone, iPad and other Apple devices through its official Made-for-iPhone (MFi) licensing program. Dubbed the “Ultra Accessory Connector” (UAC), Apple has recently launched a developer preview of the new connector type to prepare manufacturing partners for the component that in some cases will replace the use of Lightning and USB connectors, according to sources familiar with the program.

Measuring in at 2.05mm by 4.85mm at the tip, the 8-pin connector is slightly less thick than USB-C, and nearly half as wide as both USB-C and Lightning. The space-saving connector is similar in shape to ultra mini USB connectors on the market that are often bundled as proprietary cables with accessories such as Nikon cameras (like the one pictured below).

Vlad Savov (via Dan Frakes):

People familiar with Apple’s plans tell us that the company has no intention to replace Lightning or install this as a new jack on iPhones or iPads. Instead, UAC will be used as an intermediary in headphone cables.

At present, a pair of Lightning headphones can’t be made cross-compatible with USB-C devices, and equally, USB-C headphones only work with USB-C audio sources. But if you insert UAC in the middle, you’ll be able to swap between Lightning-to-UAC and USB-C-to-UAC cables with the same pair of headphones, allowing you (admittedly with the help of a couple more dongles) to switch between the various connectors on the fly. UAC will make it possible for your headphones’ firmware to adjust on the fly, recognizing whether it’s receiving audio from a Lightning or USB-C connection and playing it back appropriately.

I Wish Apple Loved Books

Daniel Steinberg:

I’ve joked that if Eddy Cue loved reading the way he clearly loves music, then iBooks, the iBookstore, and iBooks Author would be amazing. Not only aren’t they amazing, they aren’t even good.

It’s like they’ve assigned a committed carnivore to design the meals and cook for vegans. You need someone who loves and understands vegetables and shares the commitment to not using meat or meat products.


I was an early embracer and adopter of iBooks Author. I could produce beautiful books. The software was initially frustrating but they improved it in significant ways early.

Then they stopped.


Yesterday, I uploaded my latest version of my book to Gumroad and to iBooks. Within minutes I was getting email notifications of sales of my book on Gumroad.

An hour later my book was approved for sale on iBooks. This is remarkably quick. It used to take days. I looked online and my book wasn’t on the iBookstore yet. Also, my name was still listed incorrectly.

Via Adam C. Engst:

After an initial burst of enthusiasm, both iBooks and iBooks Author have languished, iCloud Drive’s integration with iBooks is flaky, and the iBooks Store never recovered momentum after Apple was found guilty of ebook price fixing back in 2013.

Bradley Metrock:

So let’s think about that a minute. You’re using software called “iBooks Author.” What, exactly, are you authoring? Um…iBooks. Yet, for some reason no one can explain, you can’t say that.


Next, Steinberg discusses the interplay between the EPUB format, the iBooks format, and attempting to provide his readers with updates. He nails this - it’s unnecessarily clumsy.

John Gruber:

iBooks Author was announced in January 2012, when the iPad was two years old. The iPad itself, seemingly, would be a fine device for creating books with iBooks Author. But iBooks Author remains Mac-only.

Update (2017-02-09): Nick Heer:

iBooks Author was most recently updated in September; prior to that, it was updated almost exactly one year prior. That’s a glacial pace for an app, but it isn’t out of line with many of Apple’s other Mac applications.


I think there’s a tremendous opportunity that Apple is sleeping on.

Swift and React Native at Artsy

Orta Therox:

It is pretty obvious that Swift is the future of native development on Apple platforms. It was a no-brainer to then build an Apple TV app in Swift, integrate Swift support into our key app Eigen, and build non-trivial parts of that application in Swift.

We first started experimenting with React Native in February 2016, and by August 2016, we announced that Artsy moved to React Native effectively meaning new code would be in JavaScript from here onwards.


The stricter type system in Swift made it harder to work on JSON-driven apps.


Native development when put next to web development is slow. Application development requires full compilation cycles, and full state restart of the application that you’re working on.


So, you’re thinking “Yeah, but JavaScript…” - well, we use TypeScript and it fixes pretty much every issue with JavaScript. It’s also no problem for us to write native code when we need to, we are still adding to an existing native codebase.


It’s worth highlighting that all of this is done on GitHub, in the open. We can write issues, get responses, and have direct line to the people who are working on something we depend on.

Update (2017-02-12): Ash Furrow:

So when Eloy proposed writing apps in JavaScript – JavaScript! – I was unenthusiastic. However, Eloy is the most pragmatic and level-headed developer I know, and he reached the decision to move to React Native after months of careful study, so I kept an open mind. And I’m glad I did.

I decided to look into JavaScript and started contributing to JS web projects at Artsy last year. And I was surprised to see that the modern JS development workflow is slick. Like, really slick. The tooling has been built with developer experience front of mind, and it shows. Orta goes into more detail in his post, but suffice it to say that compared to Xcode and Swift development, the JS workflow is mature and polished.

Toward a Galvanizing Definition of Technical Debt

Michael Feathers:

In Ward Cunningham’s original formulation, Technical Debt was the accumulated distance between your understanding of the domain and the understanding that the system reflects. We all start out with some understanding of a problem, and we write code to solve that problem. But, we learn as we go. If the code doesn’t keep up with that learning we continually stumble over a conceptual gulf when we add new features. The cost of adding features becomes higher and higher. Eventually, we simply can’t.

If this definition sounds unfamiliar, or a bit different than what you’ve read before, it’s probably because Technical Debt has become conflated with another concept - general systems entropy. It’s easy to write code quickly and not pay attention to good factoring. Over time, all of these small omissions of care accumulate and we end up with code that ends up looking more like a jungle than a clean understandable guide to the behavior of a system.


For a while, I’ve been using a different definition of Technical Debt. It helps teams frame their work in a way that highlights their choices and it can lead to better ones.

Technical Debt is the refactoring effort needed to add a feature non-invasively

ReadKit 2.5


ReadKit 2.5 is a major update that introduces a new design and contains various improvements and fixes. Beside the new UI, this version also adds support for the Touch Bar on the new MacBook Pro.

After my post in December lamenting the lack of updates, ReadKit got several quick bug fix updates, and now this. It’s great to see it continuing to improve.

For the last few weeks, I’ve been using ReadKit with Tiny Tiny RSS (via the Fever plug-in). This seems to work really well. It’s both faster and more reliable than Fever itself was. (I initially tried Miniflux, liking the simpler design, but ran into a series of bugs/errors with both the MySQL and SQLite backends. I also tried Feedbin and liked it, but I prefer to self-host.)

Previously: Goodbye Mint, Goodbye Fever.

Monday, February 6, 2017

How to Verify Time Machine Backups

Paul Horowitz:

Time Machine will verify the backup by comparing checksums, and it will alert the user if a problem or issue has been found. If the backup verifies fine, no issues will be reported. It’s possible the checksums will not match, indicating some sort of issue, corruption, or modification with the Time Machine backup, and Mac OS will offer instructions to attempt to correct the problem. It’s also possible the backup won’t have a valid checksum at all.


While the Verify Time Machine backups feature has existed for a long time in Mac OS X and Mac OS, it’s important to note that only modern versions of Mac OS maintain a record of checksums associated with each backup snapshot, so if the backup was made prior to 10.12 or 10.11 it cannot be verified by comparing the checksum this way.

I think this only detects damaged files, not missing files. The menu command is often disabled for me, for no apparent reason.
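The underlying idea, comparing per-file checksums between the source and the backup, is simple to sketch. This Python illustration is not how Time Machine actually verifies (it checks against checksums recorded at backup time, not a live hash of the source), but it shows the concept, including the missing-file case the built-in check apparently skips:

```python
# Illustrative only: hash every file in two directory trees and
# report files whose contents differ ("damaged") and files present
# in the source but absent from the backup ("missing").
import hashlib, os

def tree_checksums(root):
    sums = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                sums[rel] = hashlib.sha256(f.read()).hexdigest()
    return sums

def compare_trees(source, backup):
    src, bak = tree_checksums(source), tree_checksums(backup)
    damaged = sorted(p for p in src if p in bak and src[p] != bak[p])
    missing = sorted(p for p in src if p not in bak)
    return damaged, missing
```

A damaged file shows up in the first list, and a file that was silently dropped from the backup shows up in the second, which is the gap in the built-in verification noted above.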

GVFS (Git Virtual File System)

Saeed Noursalehi (via Peter Steinberger):

Here at Microsoft we have teams of all shapes and sizes, and many of them are already using Git or are moving that way. For the most part, the Git client and Team Services Git repos work great for them. However, we also have a handful of teams with repos of unusual size! For example, the Windows codebase has over 3.5 million files and is over 270 GB in size. The Git client was never designed to work with repos with that many files or that much content. You can see that in action when you run “git checkout” and it takes up to 3 hours, or even a simple “git status” takes almost 10 minutes to run. That’s assuming you can get past the “git clone”, which takes 12+ hours.

Even so, we are fans of Git, and we were not deterred. That’s why we’ve been working hard on a solution that allows the Git client to scale to repos of any size. Today, we’re introducing GVFS (Git Virtual File System), which virtualizes the file system beneath your repo and makes it appear as though all the files in your repo are present, but in reality only downloads a file the first time it is opened. GVFS also actively manages how much of the repo Git has to consider in operations like checkout and status, since any file that has not been hydrated can be safely ignored. And because we do this all at the file system level, your IDEs and build tools don’t need to change at all!

Note that the initial comments refer to “NFS+” when they mean “HFS+.” I doubt the choice of file system is the problem here. Some of the comments are interesting, though.
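The core trick, hydrate-on-first-read, can be shown with a toy model. This is a deliberately simplified Python sketch of the concept, not GVFS’s real implementation (which works at the Windows file system level); all names here are invented:

```python
# Toy illustration of GVFS's central idea: the repo *appears*
# complete, but a file's content is fetched from the server only the
# first time it is read, then cached locally ("hydrated"). Operations
# like `git status` need only consider hydrated files, since an
# untouched placeholder cannot have local modifications.

class VirtualRepo:
    def __init__(self, remote):
        self.remote = remote            # stands in for the Git server
        self.hydrated = {}              # local cache of fetched content
        self.fetch_count = 0

    def list_files(self):
        # Enumeration is cheap: names/metadata only, no content transfer.
        return sorted(self.remote)

    def read(self, path):
        if path not in self.hydrated:   # first open: download once
            self.fetch_count += 1
            self.hydrated[path] = self.remote[path]
        return self.hydrated[path]

    def status_candidates(self):
        # Only hydrated files could possibly differ from the server.
        return sorted(self.hydrated)

repo = VirtualRepo({"a.txt": b"alpha", "b.txt": b"beta", "c.txt": b"gamma"})
repo.read("b.txt")
repo.read("b.txt")                      # served from cache, no refetch
assert repo.fetch_count == 1
assert repo.status_candidates() == ["b.txt"]
```

Scaled up to 3.5 million files, this is why a clone no longer has to transfer 270 GB and why status only has to examine the files a developer actually touched.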

Previously: Facebook Makes Mercurial Faster Than Git.

Update (2017-02-09): Brian Harry (via Hacker News):

GVFS (and the related Git optimizations) really solves 4 distinct problems[…]


Looking at the server from the client, it’s just Git. All TFS and Team Services hosted repos are just Git repos. Same protocols. Every Git client that I know of in the world works against them. You can choose to use the GVFS client or not. It’s your choice. It’s just Git. If you are happy with your repo performance, don’t use GVFS. If your repo is big and feeling slow, GVFS can save you.

Looking at the GVFS client, it’s also “just Git” with a few exceptions. It preserves all of the semantics of Git – The version graph is a Git version graph. The branching model is the Git branching model. All the normal Git commands work. For all intents and purposes you can’t tell it’s not Git. There are three exceptions.

Control-T to Show “cp” Status

Dave Dribin:

Mind blown: Hit Ctrl-T while performing a long running cp command. Shows current input and output file and the percentage complete.

Control-T sends a SIGINFO signal. It also works with dd and ping.
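Your own long-running programs can opt into Control-T by installing a SIGINFO handler. A minimal Python sketch (SIGINFO exists only on BSD/macOS; this falls back to SIGUSR1 so the same code also runs on Linux, where the terminal has no Control-T status keystroke):

```python
# Handle the signal the terminal sends on Control-T and print a
# one-line progress report, the way cp, dd, and ping do.
import os, signal

progress = {"done": 0, "total": 1000}

def report(signum, frame):
    msg = f"copied {progress['done']}/{progress['total']}"
    progress["last_report"] = msg    # recorded for inspection
    print(msg)

sig = getattr(signal, "SIGINFO", signal.SIGUSR1)
signal.signal(sig, report)

progress["done"] = 420
os.kill(os.getpid(), sig)   # roughly what the terminal does on Control-T
```

Unlike Control-C, SIGINFO doesn’t interrupt the command; the handler runs and the copy keeps going.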

Building a LISP From Scratch With Swift

Umberto Raimondi (via Natasha Murashev):

In this article, we’ll implement a minimal LISP based on the 1978 paper by John McCarthy titled A Micro-Manual For Lisp - Not The Whole Truth, that defines a small and self-contained LISP, as a Swift framework that will be able to evaluate strings containing LISP symbolic expressions.

We’ll eventually use that compact interpreter to build a simple REPL (Read-Eval-Print Loop) that will interactively execute statements and print out the result of the evaluation. A playground to play around with the interpreter is also available.
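To give a feel for how small such an interpreter can be, here is a comparable sketch in Python rather than Swift, covering only a toy subset of McCarthy’s primitives (quote, atom, eq, car, cdr, cons, cond); it is not Raimondi’s code:

```python
# A micro-LISP in three stages: tokenize, parse into nested lists,
# and recursively evaluate. Atoms are Python strings; lists are
# Python lists; nil/false is the empty list, truth is the atom "t".

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)                   # discard ")"
        return lst
    return tok                          # an atom (symbol)

def evaluate(x, env=None):
    env = env or {}
    if isinstance(x, str):              # bare symbol: look it up
        return env[x]
    op, *args = x
    if op == "quote":
        return args[0]
    if op == "atom":
        return "t" if isinstance(evaluate(args[0], env), str) else []
    if op == "eq":
        left, right = (evaluate(e, env) for e in args)
        return "t" if left == right else []
    if op == "car":
        return evaluate(args[0], env)[0]
    if op == "cdr":
        return evaluate(args[0], env)[1:]
    if op == "cons":
        head, tail = (evaluate(e, env) for e in args)
        return [head] + tail
    if op == "cond":
        for test, branch in args:
            if evaluate(test, env) == "t":
                return evaluate(branch, env)
        return []
    raise ValueError(f"unknown form: {op}")

def lisp(src):
    return evaluate(parse(tokenize(src)))

assert lisp("(car (quote (a b c)))") == "a"
assert lisp("(cons (quote x) (quote (y)))") == ["x", "y"]
```

A REPL is then just a loop that reads a line, calls lisp() on it, and prints the result.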

The Secret Trackpad on the iPhone

David Pogue:

Still keep your finger down. At this point, hard presses also let you select (highlight) text:

  • Hard-press twice to select an entire sentence.
  • Hard-press three times to select an entire paragraph.

Or use this trick: Move the insertion point to a word; if you now press hard, you highlight that word.

Every day I use 3D Touch to select text on my iPhone and double-click to select words on my Mac, but I never realized these could be combined.

Friday, February 3, 2017

Lawsuit Claims Apple Intentionally Broke FaceTime on iOS 6

Mikey Campbell (via Husain Sumra):

A class-action lawsuit filed in California on Thursday alleges Apple schemed to force iPhone users to upgrade to iOS 7 in a bid to save money on a data services deal with Akamai, a move that rendered older hardware like iPhone 4 and 4S unusable.


Initially, calls routed through Akamai’s relay servers accounted for only 5 to 10 percent of FaceTime traffic, but usage quickly spiked. On Nov. 7, 2012, a jury found Apple’s peer-to-peer FaceTime call technology in infringement of patents owned by VirnetX. Along with a $368 million fine, the ruling meant Apple would have to shift away from peer-to-peer to avoid further infringement.


Citing internal emails and sworn testimony from the VirnetX trial, the lawsuit alleges Apple devised a plan to “break” FaceTime on iOS 6 or earlier by causing a vital digital certificate to prematurely expire. Apple supposedly implemented the “FaceTime Break” on April 16, 2014, then blamed the sudden incompatibility on a bug, the lawsuit claims.

Previously: After Patent Loss, Apple Makes FaceTime Worse.

In Praise of OmniDiskSweeper

Rob Griffiths:

I’ve tried a bunch of these tools over the years, both graphical and text-based, but I still keep coming back to an oldie-but-goodie—and it’s free: Omni’s OmniDiskSweeper has everything I want in a disk space usage tool. It’s got an intuitive interface, and a way to either delete what I find or open the containing folder to take a closer look.

Perhaps it’s because I’m a column-view Finder kind of person, but I love the columnar drill-down layout that OmniDiskSweeper uses.

I keep coming back to it, too, because I like the interface. The biggest flaw is that there’s no built-in way to count files that the current user doesn’t have access to. However, you can use Terminal to run it under sudo (by launching the executable inside the app bundle directly).

XPoCe: XPC Snooping Utilities

Jonathan Levin (via dragosr):

XPC* is the enhanced IPC framework used in *OS. Ever since its introduction in 10.7/iOS 5, its use has exploded, as AAPL is rewriting most of its daemons to use it in place of the venerable raw Mach messages. Mach still provides the medium, but message payloads are now dictionary objects - reducing (but not eliminating) type confusion mistakes, and greatly simplifying parsing. In addition, XPC is closely tied to GCD (offering much better performance) and entitlements (greater security).

His utility lets you inject some code via DYLD_INSERT_LIBRARIES to watch the traffic.
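The mechanism is the standard dyld interposition trick: set DYLD_INSERT_LIBRARIES in the target’s environment. Roughly, it looks like this (a sketch only; both paths below are illustrative placeholders, not XPoCe’s actual file names):

```swift
import Foundation

// Launch a target process with an interposing dylib inserted via
// DYLD_INSERT_LIBRARIES, so the library's hooks see each XPC message.
let process = Process()
process.launchPath = "/usr/libexec/some-xpc-client"      // hypothetical target
process.environment = ["DYLD_INSERT_LIBRARIES":
                       "/usr/local/lib/libXPoCe.dylib"]  // hypothetical dylib
process.launch()
process.waitUntilExit()
```

The same effect is more commonly achieved from a shell by prefixing the environment variable to the command line. Note that SIP restricts this technique against Apple-signed binaries on modern systems.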

ARM Mac Notebook Rumors

Mark Gurman:

Apple Inc. is designing a new chip for future Mac laptops that would take on more of the functionality currently handled by Intel Corp. processors, according to people familiar with the matter.


Apple engineers are planning to offload the Mac’s low-power mode, a feature marketed as “Power Nap,” to the next-generation ARM-based chip. This function allows Mac laptops to retrieve e-mails, install software updates, and synchronize calendar appointments with the display shut and not in use. The feature currently uses little battery life while run on the Intel chip, but the move to ARM would conserve even more power, according to one of the people.

This doesn’t make a whole lot of sense to me. It just doesn’t seem like it would be worth it as described.

I’m more intrigued by this Slashdot comment by Anonymous Coward:

Apple already has several ARM powered laptops drifting around internally. I’ve seen several of them with my own eyes. There’s at least five different prototypes, all constructed in plastic cases with varying degrees of complexity (some are literally just a clear acrylic box, others look more like 3D printed or milled parts designed to look like a chunky MBA or iBook).


All of them boot encrypted and signed OS images, which are fully recoverable over the internet so long as you’ve got WiFi access (similar to how their Intel powered systems do it). You cannot choose a version of the OS to load, you get whatever the latest greatest one is and that’s it. They’ve completely ported OS X to ARM (including all of Cocoa and Aqua), however a ton of utilities that normally come with OS X are missing (there’s no Disk Utility, Terminal, ColorSync, Grapher, X11, Audio/MIDI setup, etc). A lot of that functionality has been merged into a new app called “Settings” (presumably to match the iOS counterpart), which takes the place of System Preferences.

Likewise, App Store distribution appeared to be mandatory. […] The filesystem seemed a bit… peculiar, to say the least. Everything was stored in the root of the disk drive—that is to say, the OS didn’t support multiple users at all, and everything that you’d normally see in your home directory was presented as / instead. I don’t think the physical filesystem was actually laid out like this, it’s just that the Finder and everything else had been modified to make you believe that’s the way the computer worked. There was no /Applications folder anymore, your only option for launching and deleting apps was through Launchpad.

The problem with the “dump Intel for ARM” idea is that it wouldn’t work at the high end. ARM isn’t competitive there, some people really want x86 compatibility, and emulation doesn’t seem feasible. Even Apple wouldn’t alienate its customers with that sort of a switch. But what if the plan is to bifurcate the Mac line? A line of locked down ARM Macs and a line of Pros that really do look Pro in comparison?

The ARM Macs would simply drop support for all the old software. Intel-based Macs would still be around for development and other high-end users who are willing to pay more, but Apple’s focus would be on the At Ease line. It would be a middle ground between iOS and Mac: more powerful than an iPad Pro with a keyboard, and limited to apps from the Mac App Store so that it’s harder to screw up than a regular Mac. This sounds like a crazy rumor, but there is a certain logic to it.

That said, my personal bets are:

Update (2017-02-03): ATP Tipster:

Allow me to take a moment and shoot down that Slashdot ARM Mac post. Total bullshit.

Thursday, February 2, 2017

Xcode 8.3: Waiting in XCTest

Joe Masilotti:

At first glance XCTestWaiter is simply a new approach to waiting for XCTestExpectations to fulfill. However, there are a few gems hidden beneath the surface.


A big advantage of this approach is that the test suite reads as a synchronous flow. There is no callback block or completion handler. The helper method simply returns a boolean indicating if the element appeared or not.


You are now completely in control of when and how to fail your tests if an expectation fails to fulfill. This enables waiting for optional elements, like a login screen or a location services authorization dialog.


Along with the new waiter class, XCTestExpectations was subclassed to make specific expectations a little easier to write.
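The boolean-returning helper pattern he describes looks roughly like this (a sketch, not code from the post; the helper name and default timeout are mine):

```swift
import XCTest

extension XCTestCase {
    /// Waits for a UI element to appear and reports success as a Bool,
    /// leaving the decision about whether to fail up to the caller.
    func waitForElementToAppear(_ element: XCUIElement,
                                timeout: TimeInterval = 5) -> Bool {
        let predicate = NSPredicate(format: "exists == true")
        let expectation = XCTNSPredicateExpectation(predicate: predicate,
                                                    object: element)
        // XCTWaiter (new in Xcode 8.3) returns a result instead of
        // failing the test when the expectation isn't fulfilled.
        let result = XCTWaiter().wait(for: [expectation], timeout: timeout)
        return result == .completed
    }
}
```

Because the helper just returns a boolean, a test can wait for an optional element, such as a login screen, and dismiss it only if it actually appeared.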

Previously: XCTestExpectation Gotchas, Xcode 6.0.1 Asynchronous Tests, XCTest​Case / XCTest​Expectation / measure​Block().

Finder and Terminal Are Friends

LSelect (via Dr. Drang):

lselect is an AppleScript that lets you select files in the Finder using shell glob syntax as you would to list files with ls. For an animated illustration of how it works, view this short screencast.

Curt Clifton:

As developers, we tend to spend a lot of time typing in Terminal windows. Even so, I often find it more helpful to browse directories and files in Finder. I have three little hacks that simplify moving between the two modes.

Update (2017-02-03): Austin Ziegler notes this script, which can keep a Finder window updated with the current directory in a Terminal window.

Things Every Hacker Once Knew

Eric S. Raymond:

This document is a collection of facts about ASCII and related technologies, notably hardware terminals and RS-232 and modems. This is lore that was at one time near-universal and is no longer. It’s not likely to be directly useful today - until you trip over some piece of still-functioning technology where it’s relevant (like a GPS puck), or it makes sense of some old-fart war story. Even so, it’s good to know anyway, for cultural-literacy reasons.

Emoji Logos

Paul Kafasis:

The new Logoji Instagram account is great. It takes real logos and reworks them to replace elements with standard emoji (using Apple’s emoji art, specifically).

Wednesday, February 1, 2017

Swift Classes That Conform to Protocols

Chris Eidhof:

The other day, someone asked how to have a variable which stores a UIView that also conforms to a protocol. In Objective-C, you would simply write UIView<HeaderViewProtocol>. In current Swift, you can’t write something like that. This post shows two workarounds.
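One possible workaround looks roughly like this (a sketch; the post’s own approach may differ, and the `setTitle` requirement is an assumed example): store the same object twice, once as each type, using a generic initializer to guarantee it satisfies both.

```swift
import UIKit

// An assumed protocol for illustration.
protocol HeaderViewProtocol {
    func setTitle(_ title: String)
}

// Wrapper that exposes one object under both of its types.
struct AnyHeaderView {
    let view: UIView
    let header: HeaderViewProtocol

    init<T>(_ both: T) where T: UIView, T: HeaderViewProtocol {
        self.view = both
        self.header = both
    }
}
```

Callers add `view` to the hierarchy while talking to `header` through the protocol, which recovers most of what `UIView<HeaderViewProtocol>` gives you in Objective-C.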

Automatically Test Your Database Backups

Marco Arment:

The solution is to frequently and automatically test backups by:

  1. Regularly downloading the latest backup from S3 (or wherever) and performing a full restore onto a clean server.
  2. Testing its validity in a way that a human is sure to notice if it stops working properly.

The first part sounds hard, but isn’t. For Overcast, I run an inexpensive Linode server devoted to automatically fetching, installing, and testing the latest backup every day and emailing me a report.
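As a rough sketch of what such an automated check might look like (every path, bucket name, and command here is a placeholder, not Marco’s actual setup):

```swift
import Foundation

// Run a command and return its exit status.
func run(_ path: String, _ args: [String]) -> Int32 {
    let p = Process()
    p.launchPath = path
    p.arguments = args
    p.launch()
    p.waitUntilExit()
    return p.terminationStatus
}

// 1. Fetch the latest backup and restore it onto a clean database.
let steps: [(String, [String])] = [
    ("/usr/local/bin/aws",
     ["s3", "cp", "s3://example-backups/latest.sql.gz", "/tmp/latest.sql.gz"]),
    ("/bin/sh",
     ["-c", "gunzip -c /tmp/latest.sql.gz | mysql restore_test"]),
]
var restored = true
for (path, args) in steps where restored {
    restored = run(path, args) == 0
}

// 2. Report in a way a human will notice: a daily e-mail, so a silently
//    failing restore shows up as a missing or alarming message.
let report = restored ? "Backup restore OK" : "BACKUP RESTORE FAILED"
_ = run("/bin/sh",
        ["-c", "echo '\(report)' | mail -s 'Daily backup check' admin@example.com"])
```

The key design choice is step 2: the report arrives every day, succeed or fail, so a broken pipeline can’t fail silently.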

iOS to Drop Support for 32-bit Apps

Peter Steinberger:

RIP 32-bit emulation mode in iOS 11?

Andrew Cunningham:

Beta builds of iOS 10.3, the first of which was issued last week, generate warning messages when you try to run older 32-bit apps. The message, originally discovered by PSPDFKit CEO and app developer Peter Steinberger, warns that the apps “will not work with future versions of iOS” and that the app must be updated by its developer in order to continue running. The apps still run in iOS 10.3, but it seems likely that iOS 11 will drop support for them entirely.


Apple has required 64-bit support for all new app submissions since February of 2015 and all app update submissions since June 2015, so any apps that are still throwing this error haven’t been touched by their developer in at least a year and a half[…]

Roman Loyola (Hacker News):

The switch to 64-bit only support means that older iOS devices built on 32-bit architecture will not be able to upgrade to the new iOS. This includes the iPhone 5, 5c, and older, the standard version of the iPad (so not the Air or the Pro), and the first iPad mini.


The Ars Technica article on this issue cuts to the heart of why this is so devastating: there is tons of software--software which was really interesting and I dare say “seminal” for this important era of computing; and which is not old or outdated by any sane standard--that this destroys access to going forward, for essentially no benefit.

Apple insisted that they get to curate something of critical value, but they don’t comprehend the moral weight of that responsibility, and now want to just go around burning down their Apple-branded libraries.

It makes me sad, but there definitely is a benefit to Apple and (a smaller one) to users.


I’m not sure that this means what the article says it means. Apple was selling the iPhone 5C in India until less than a year ago. Dropping support for the new OS that soon would be uncharacteristic for the iPhone.

Instead, they may simply be dropping support for 32 bit apps on a 64 bit CPU. Having to support 64 bit and 32 bit apps on a single device forces them to ship two versions of every shared library, and is probably annoying for them in various types of interprocess communication, because, for example, CGFloat and integer types are different sizes.


The fact that those apps are still 32-bit means they’re unmaintained. The fact that they’re unmaintained means that they’re likely to break at some arbitrary OS update anyway. Even common apps like Tweetbot, by reputable developers, will break on a new major version, so your app that hasn’t been updated in years is probably going to die off soon anyway.

The only way to make sure to keep those apps open is to keep from updating iOS, and if you do that this doesn’t affect you anyway.

Update (2017-02-24): Juli Clover:

In the Settings app, there’s a new “App Compatibility” section that lists apps that may not work with a future version of iOS. Tapping on one of the apps opens it up in the App Store so you can see when it was last updated.

Update (2017-04-14): Andrew Cunningham:

Putting aside that this spells the end for all kinds of old, unmaintained games and other apps from the early days of the smartphone and App Store, Apple’s complete transition to 64-bit is a unique and interesting technical achievement. Here’s the complete timeline of the transition, to date[…]

Update (2017-06-04): Eli Hodapp (tweet):

As pointed out by TA reader Severed, 32-bit apps no longer appear in App Store search results.


Well, it seems 32-bit apps are once again searchable on the App Store. We’ll need to read some tea leaves to figure out what this means, but either way there was a good 12-24 hours where 32-bit apps vanished from App Store search. Whether this was a test for something that’s coming in the future, or just a mistake on Apple’s part is anyone’s guess.

Apple’s Q1 2017 Results

Apple (Hacker News):

Apple today announced financial results for its fiscal 2017 first quarter ended December 31, 2016. The Company posted all-time record quarterly revenue of $78.4 billion and all-time record quarterly earnings per diluted share of $3.36. These results compare to revenue of $75.9 billion and earnings per diluted share of $3.28 in the year-ago quarter.

Jason Snell:

There was a time when every quarter was a record for Apple, but after last year’s rough year of regression (following a record-smashing 2015), it wasn’t a sure thing that we’d see more of those for a while. But for the holiday quarter of calendar-year 2016, Apple beat its own advance guidance and reported a record revenue of $78.4 billion.

John Voorhees:

Below, we’ve compiled a graphical visualization of Apple’s Q1 2017 financial results.

Dr. Drang has moving averages.

Jeff Johnson:

Most Apple fiscal quarters are 13 weeks long. Once in a while, however, they need a 14 week quarter. You might call it a “leap quarter”. […] What a difference a week makes! Rather than record revenue, we have another down quarter for Apple. The lone bright spot was services; everything else was a year/year decrease.
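Johnson’s point checks out with quick arithmetic (figures from the quote, in billions; averaging revenue per week is my own simplification of his argument):

```swift
// FY2017 Q1 was 14 weeks; FY2016 Q1 was 13 weeks.
let q1_2017 = 78.4
let q1_2016 = 75.9

let perWeek2017 = q1_2017 / 14   // ≈ $5.60B per week
let perWeek2016 = q1_2016 / 13   // ≈ $5.84B per week
// Despite the record headline number, average weekly revenue
// declined year over year — the "down quarter" Johnson describes.
```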

Marco Arment (tweet):

Apple and commentators can keep saying the iPad is “the future of computing,” and it might still be. But we’re starting its seventh year in a few months, and sales peaked three years ago.

What if the iPad isn’t the future of computing?

Marco Arment:

I’d say the iPad’s biggest constraints are the OS’ file and window limitations, and the health of apps, both first- and third-party.

Unfortunately, these aren’t quick fixes, and the result of “fixing” them might just be reimplementing the Mac poorly.

The replacement-cycle problem: How many iPads are mostly used for video playback? Will they be replaced with a $300 iPad or $50 Fire Tablet?

Nick Heer:

On the flip side of that coin, what if Apple treated the iPad as the future of computing, instead of upscaling iPhone features to fit the iPad’s display, or hardly paying attention to it for an entire year? Would customers respond to an earnest attempt?

John Gruber:

The peak years (2013 and 2014) were inflated because it was an untapped market. Steve Jobs was right, there was room for a new device in between a phone and a laptop, and the iPad was and remains an excellent product in that space. But people don’t need to keep buying new iPads. I think the replacement cycle is clearly much more like that of laptops than that of phones.


The other factor is that the conceptual space between phones and laptops has shrunk. iPhones have gotten a lot bigger, and MacBooks have gotten thinner and lighter.

Jason Snell:

The iPad has 85 percent of the market of tablets priced over $200. The important facts here: Apple’s not interested in selling a sub-$200 iPad, and so that means it’s doing spectacularly well in the market. The market’s just contracting. So this isn’t necessarily about the rejection of the iPad—it could be about flagging enthusiasm for the entire category of premium tablets.


The number of people buying the iPad for the first time is very strong, according to Cook, which means that the tablet market isn’t actually saturated.

Benjamin Mayo:

So many basic computing tasks are convoluted and messy on the iPad we know today. Tasks like tweeting an image embedded into a webpage in Safari, playing background music without getting interrupted, collating a handful of attachments from different recipients and sending them off in a new mail message, and so many other things that people want to do every day. Heck, it’s still not possible to look at two emails side-by-side.

Update (2017-02-02): David Sparks:

In my mind, the issue is that users are not pushing the iPad harder to do more work for them, which would naturally end up in users wanting to buy newer, faster, and better iPads. Put simply, I think the issue is software.


At last year's iPad Pro event Apple made a big deal about how the iPad is powerful enough to replace a PC laptop. I believe for a lot of people that could be true. But it's not quite there yet because of the software limitations.

Update (2017-02-03): Jeff Johnson:

The inescapable conclusion is that even if the 14th week in FY2017 Q1 was one of the slowest weeks of the past two years, that’s still enough to account for the difference with FY2016 Q1. Ergo, FY2017 Q1 did not in fact represent a “return to growth”, as so many media outlets have incorrectly reported.

Ryan Jones:

Here are the numbers on a weekly basis, with Luca’s $0.6B removed. Big change from positive to negative.

Jason Snell:

We can unravel it more if we like: You can back out a huge settlement benefit that hit the first quarter of FY16, which makes Services look even better (but doesn’t change the overall net). You can start to calculate out the channel and supply constraints and get a better sense of demand. In other words, you can make the numbers tell the story you want to tell, with charts to match, and slice it nine different ways.

But, for better or for worse, the window we get into Apple’s finances is based on its financial statements—and that means the quarters as Apple defines them.

John Gruber (tweet):

I don’t think it’s quite right to ding the quarter by a full 8 percent — the entire last week started with Christmas day — but surely some sort of correction is necessary for year-over-year comparisons.

Update (2017-02-06): Jeff Johnson:

If there was a longer discussion of the extra week during the conference call, I didn't see it, and I think it's safe to say that most people didn't see it. The press release from Apple a half hour before the conference call did not mention the extra week. Naturally, the press all ran with the press release. And after the conference call, I did not see any of the press correct themselves. So if there was a more nuanced discussion of the year-year comparison, that information did not reach the public. The headlines were an all-time record quarter, Apple is doing great, the critics were wrong, etc.

Dr. Drang:

What I can do, though, since the code is basically already written, is take a look backward and forward to see how common it is that an Apple fiscal quarter is something other than 13 weeks long.

Dr. Drang:

What’s surprising to me is how slowly iPad software has advanced in the seven years since its introduction. I’ve always thought of the iPad as the apotheosis of Steve Jobs’s conception of what a computer should be, what the Mac would have been in 1984 if the hardware were available. But think of what the Mac could do when it was seven years old[…]