Thursday, January 17, 2019

Stack Allocation for Non-Escaping Swift Closures

aschwaighofer has a pull request for stack-allocating Swift closures.

Slava Pestov:

Short history of non-escaping functions:

- Swift 4.1 and earlier: type checker enforcement; same ABI as escaping
- Swift 4.2: new ABI - the context is a trivial pointer and not ref-counted like with escaping
- now: non-escaping contexts allocated on stack

The ABI change was key here - Arnold frontloaded the changes before we started locking down, so now stack allocation is “just” an optimization.

And ancient pre-history for those who weren’t around at the time:

- Swift 2.2 and earlier: all function values escaping by default, opt-in @noescape attribute for parameter types
- Swift 3: @noescape becomes the default for function parameters; @escaping added to opt in

More trivia: In ancient Swift, the accepted idiom to turn a non-escaping function into an escaping one was unfortunately an unsafeBitCast(). The compiler added a special withoutActuallyEscaping form and started screaming about casts in 4.0 so that we could stage in the ABI change.
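For readers who didn’t live through those releases, here is roughly what the modern arrangement looks like (a sketch with my own function names): parameters are non-escaping unless marked @escaping, and withoutActuallyEscaping is the sanctioned replacement for the old unsafeBitCast() idiom.

```swift
var handlers: [() -> Void] = []

// Since Swift 3, function parameters are non-escaping by default; storing
// one so that it outlives the call requires the @escaping attribute.
func register(_ handler: @escaping () -> Void) {
    handlers.append(handler)
}

// withoutActuallyEscaping hands the body a temporary escaping copy of a
// non-escaping closure and traps at runtime if it actually escapes.
func noneMatch(_ values: [Int], _ predicate: (Int) -> Bool) -> Bool {
    return withoutActuallyEscaping(predicate) { escapablePredicate in
        // lazy.filter stores its predicate, so it requires an escaping closure.
        values.lazy.filter { escapablePredicate($0) }.isEmpty
    }
}
```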

Previously: Optional Non-Escaping Swift Closures.

Update (2019-01-23): Matt Gallagher:

I played around with Swift master’s new stack-allocated closure contexts today. My “capturing closure” mutex test case from this article improved 10x, from 2.051 seconds to 0.212 seconds, putting it within 20% of the inlined version.

Unfortunately, I needed to disable runtime exclusivity checking to get this performance. With exclusivity on, performance was 0.384 seconds (nearly 100% slower). Seems like this code should be statically checkable for exclusivity. Hope this improves.

Another unfortunate point: DispatchQueue.sync’s closure still isn’t optimized to the stack. I think this is a consequence of the stdlib’s interface around dispatch_queue_sync. I hope it gets resolved soon. I’d rather just use DispatchQueue.sync and not worry about performance.

Ole Begemann:

Stack allocation doesn’t work yet for Objective-C blocks. I suspect that also applies to wrappers like DispatchQueue.sync.
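For context, the “capturing closure” mutex benchmark Matt refers to has roughly this shape (a sketch based on his description, with my own names, not his code): a pthread_mutex wrapper whose sync method takes a non-escaping closure, driven by a loop whose closure captures and mutates a local counter. The capture is what needs a closure context, and the mutation is what the runtime exclusivity check he mentions has to guard.

```swift
import Darwin

// Minimal pthread_mutex wrapper. `sync` takes a non-escaping closure, so
// with the new optimization its context can live on the stack rather than
// being heap-allocated on every call.
final class PThreadMutex {
    private var mutex = pthread_mutex_t()

    init() { pthread_mutex_init(&mutex, nil) }
    deinit { pthread_mutex_destroy(&mutex) }

    func sync<R>(_ body: () throws -> R) rethrows -> R {
        pthread_mutex_lock(&mutex)
        defer { pthread_mutex_unlock(&mutex) }
        return try body()
    }
}

// The "capturing closure" case: the closure captures and mutates `total`,
// the access that runtime exclusivity checking verifies.
let lock = PThreadMutex()
var total = 0
for _ in 0..<10_000_000 {
    lock.sync { total += 1 }
}
```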

Acorn 6.3 Postmortem

Gus Mueller:

Apple added a new feature called “Portrait Matte” to its latest iPhones in the iOS 12 update. It’s a special image embedded in HEIC images, based on the depth data and some machine learning in your photo. You can then use this image as a mask to blur parts of your image (which is what the iOS “Portrait” camera setting does), or you can use this data to remove backgrounds.

But how should Acorn expose this matte? My first stab was to have Acorn add the matte as an additional layer. After playing with it a bit, it just felt off. So I ended up adding the matte as a mask to the main layer when opening the image. But folks are obviously going to want to do more than just mask out the background, so I added new features to Acorn where you could easily drag and drop the layer mask into its own layer. I also made it easy to move an existing layer to another layer’s mask via drag and drop. I can’t predict what people are going to want to do with the mask, but I might as well make it easy to move around.

It was also during this development that I found some bugs in Apple’s My Photo Stream. The matte was showing up rotated incorrectly when opening images out of Photos. At first I figured I was just reading the data wrong, but nope: under certain conditions, when images with the portrait mask were uploaded to MPS, the rotation data from the camera went missing. After some communication and a Radar filed at Apple, this bug was fixed in an OS update. Bug fixes like this don’t happen very often, but when they do it makes filing all the other Radars worth it. Mostly.
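For the curious, the matte Gus describes is stored as an auxiliary image inside the HEIC file. Something along these lines (a sketch; the helper name is mine) pulls it out with ImageIO and AVFoundation so it can be applied as a mask:

```swift
import AVFoundation
import CoreImage
import ImageIO

// Read the Portrait Effects Matte auxiliary image from a HEIC file and wrap
// it in a CIImage suitable for use as a layer mask.
func portraitMatte(from url: URL) -> CIImage? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypePortraitEffectsMatte
          ) as? [AnyHashable: Any],
          let matte = try? AVPortraitEffectsMatte(fromDictionaryRepresentation: info)
    else {
        return nil
    }
    // The matte may not match the photo's pixel dimensions or orientation,
    // so callers generally scale and rotate it to fit before masking.
    return CIImage(cvPixelBuffer: matte.mattingImage)
}
```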

Big Win for Web Accessibility in Domino’s Pizza Case

Lainey Feingold (via Jared Spool):

The Ninth Circuit Court of Appeals gave a big win to digital accessibility in a case against Domino’s Pizza. The lower court had ruled for Domino’s and tossed the case out of court. The appeals court reversed, ruling that the ADA covers websites and mobile applications and that the case can stay in court.

[…]

The case will now go back to the lower federal court in California. As the appellate judges concluded, “We leave it to the district court, after discovery, to decide in the first instance whether Domino’s website and app provide the blind with effective communication and full and equal enjoyment of its products and services as the ADA mandates.”

Update (2019-01-23): Eli Schiff:

The US Department of Justice is insane. They require your site to be “Accessible” but provide zero guidelines. And then they laugh at you for not being in compliance even though there is no standard!

Ryan Rich:

Surprisingly this is how the majority of compliance works. No framework or regime is going to tell you exactly what to do. It’s why we have 3rd party auditing firms. Maybe there’s an opportunity in there for accessibility auditing. I doubt it though. No one cares enough.

Update (2019-01-28): See also: Ashley Bischoff and Eli Schiff.

How Facebook Keeps Messenger from Crashing on New Year’s Eve

Amy Nordrum (via Hacker News):

In addition to shifting loads, the Messenger team has developed other levers that it can pull “if things get really bad,” says Ahdout. Every new message sent to a server goes into a queue as part of a service called Iris. There, messages are assigned a timeout—a period of time after which that message will drop out of the queue to make room for new messages. During a high-volume event, this allows the team to quickly discard certain types of messages, such as read receipts, to focus its resources on delivering ones that users have composed.

[…]

Georgiou says the group can also sacrifice the accuracy of the green dot displayed in the Messenger app that indicates a friend is currently online. Slowing the frequency at which the dot is updated can relieve network congestion. Or, the team could instruct the system to temporarily delay certain functions—such as deleting information about old messages—for a few hours to free up CPUs that would ordinarily perform that task, in order to process more messages in the moment.

[…]

“You can bundle some of those together into a single large request before you send it downstream. Doing that, you reduce the computational load on downstream systems.”

Batches are formed based on a principle called affinity, which can be derived from a variety of characteristics. For example, two messages may have higher affinity if they are traveling to the same recipient, or require similar resources from the back end. As traffic increases, the Messenger team can have the system batch more aggressively. Doing so will increase latency (a message’s roundtrip delay) by a few milliseconds, but makes it more likely that all messages will get through.
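The affinity idea translates into something like this sketch (not Facebook’s code; the Message type and the recipient-based affinity key are simplifications of mine): group pending messages by an affinity key, cap the batch size, and raise the cap during high-traffic periods so that each downstream request carries more messages at the cost of a few milliseconds of latency.

```swift
import Foundation

struct Message {
    let recipientID: String
    let payload: Data
}

struct AffinityBatcher {
    /// Maximum messages per downstream request; raised under heavy load so
    /// fewer, larger requests reach the back end (more latency, less work).
    var maxBatchSize: Int

    /// Group pending messages by a simple affinity key (here: the recipient),
    /// then split each group into batches of at most `maxBatchSize`.
    func batches(for pending: [Message]) -> [[Message]] {
        precondition(maxBatchSize > 0)
        let groups = Dictionary(grouping: pending, by: { $0.recipientID })
        return groups.values.flatMap { group -> [[Message]] in
            stride(from: 0, to: group.count, by: maxBatchSize).map { start in
                Array(group[start ..< min(start + maxBatchSize, group.count)])
            }
        }
    }
}
```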