Monday, February 1, 2021

iMessage’s BlastDoor Sandbox

Samuel Groß (via Hacker News, MacRumors):

One of the major changes in iOS 14 is the introduction of a new, tightly sandboxed “BlastDoor” service which is now responsible for almost all parsing of untrusted data in iMessages (for example, NSKeyedArchiver payloads). Furthermore, this service is written in Swift, a (mostly) memory safe language which makes it significantly harder to introduce classic memory corruption vulnerabilities into the code base.


As can be seen, the majority of the processing of complex, untrusted data has been moved into the new BlastDoor service. Furthermore, this design with its 7+ involved services allows fine-grained sandboxing rules to be applied, for example, only the […]


To limit an attacker’s ability to retry exploits or brute force ASLR, the BlastDoor and imagent services are now subject to a newly introduced exponential throttling mechanism enforced by launchd, causing the interval between restarts after a crash to double with every subsequent crash (up to an apparent maximum of 20 minutes). With this change, an exploit that relied on repeatedly crashing the attacked service would now likely require on the order of multiple hours to roughly half a day to complete instead of a few minutes.
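The backoff described above is easy to sketch. This is a minimal illustration, not Apple’s implementation: only the doubling behavior and the roughly 20-minute cap come from the report; the base interval here is an assumption.

```python
def restart_delay(crash_count: int,
                  base_seconds: float = 10.0,
                  cap_seconds: float = 20 * 60.0) -> float:
    """Seconds a launchd-style throttle would wait before restarting a
    crashed service: the interval doubles with every crash, up to a cap."""
    return min(base_seconds * (2 ** crash_count), cap_seconds)

# After a handful of crashes, the attacker is already waiting the full
# 20 minutes per attempt, which is what stretches an exploit from
# minutes into hours.
delays = [restart_delay(n) for n in range(10)]
```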

John Gruber (tweet):

This is a big deal, and from what I understand, a major multi-year undertaking by the iMessage team. Cimpanu’s report makes it sound like it’s an iOS 14 feature, but it’s on MacOS 11, too — it’s an iMessage feature.



Old Unix Geek

On the one hand, that's clever. On the other hand, it's an insane amount of complexity to avoid fixing bugs in some parsers. Keep It Simple Stupid, and all that. I guess it's worth a one-handed clap.

Sure would be nice if it was possible for normal apps to split unverified data handling into a separate process.

@Juri Yep, I’ve found that very useful on macOS, not just for untrusted data but also to work around macOS memory leaks and crashing bugs. Too bad Apple doesn’t allow it on iOS.

I'm not sure it's a good thing that writing a simple messaging app requires system-level understanding and 1337 h4x0r attack mitigations.

there are hundreds of popular chatting and messaging apps on the platform -- and i suspect none of the others will be written with this sort of care.

shouldn't the platform provide these as features for all apps? it just seems like an odd thing for a chat app to concern itself with.

perhaps that will come in a later OS update someday. some features start locally in one app before rolling out as an OS feature, I'm just surprised to see "security" is now among them.

>Yep, I’ve found that very useful on macOS, not just for untrusted data but also to work around macOS memory leaks and crashing bugs. Too bad Apple doesn’t allow it on iOS.


>On the one hand, that's clever. On the other hand, it's an insane amount of complexity to avoid fixing bugs in some parsers.

I only skimmed the article, and I'm not quite sure what makes BlastDoor special. It has its own sandbox profile, yes… but iMessage/Messages already comes with… _multiple_ sandbox profiles in addition to this new BlastDoor one:


All of these seem to be related to iMessage, Messages, or its ancestor iChat, and as the flowchart shows, messages _already_ went through multiple processes.

So, one, it's a bummer that third parties on iOS are barred from designing this kind of architecture (because they can't run XPC services of their own, nor, to my knowledge, ship their own sandbox profiles), but also, two, I'm not sure why yet another separate process was needed when there are already half a dozen?

>shouldn't the platform provide these as features for all apps?

To a point, yes.

>it just seems like an odd thing for a chat app to concern itself with.

Well, yes and no. These days, sandboxing portions of your app into separate processes isn't that unusual. If you have, say, a video player, it's a sound security strategy to have one process for networking (to fetch the video stream), a second process for the codec (to transform the compressed video into something playable), and a third process for the user interface. It is quite common for codecs, parsers, etc. to have weird edge cases that can be exploited, and if the attacker (via the network) can only reach a process that isn't concerned with parsing, that's a significant reduction of the attack surface.
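To make that split concrete, here's a minimal sketch using plain OS processes. This is illustrative Python, not how a Mac app would actually do it (that would be an XPC service), and `parse_in_child` and `PARSER_CHILD` are invented names: the point is only that the fragile parsing step runs in a process the attacker's input can crash without touching the process that owns the network or the UI.

```python
import subprocess
import sys
from typing import Optional

# Hypothetical stand-in for a fragile codec/parser with weird edge cases.
PARSER_CHILD = r"""
import sys
data = sys.stdin.buffer.read()
sys.stdout.write(data.decode("utf-8", errors="replace").strip())
"""

def parse_in_child(data: bytes) -> Optional[str]:
    """Run the risky parsing step in its own OS process, so a crash there
    cannot corrupt the process that owns the network socket or the UI."""
    result = subprocess.run([sys.executable, "-c", PARSER_CHILD],
                            input=data, capture_output=True, timeout=5)
    if result.returncode != 0:   # the parser crashed: contain the failure
        return None
    return result.stdout.decode("utf-8")
```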

However, the extent to which Apple is doing this seems unusual.

This is a great design. As others have said, it's a nice trick to use XPC on macOS to isolate vulnerable or problematic code, no matter how "simple" it might be. That's especially true for something that regularly processes input from the outside world, like the Messages app.

Old Unix Geek


I disagree. Bugs are usually due to too much complexity. This solution adds complexity to work around bugs, i.e. the results of too much complexity. I've also written multi-process, XPC-based code. Although my code was solid and worked on both x86 and PPC, it's something I did reluctantly because it added complexity and reduced determinism. More complexity and less determinism is harder to think about. That makes for more bugs, and makes those bugs harder to reproduce and debug.

Given how software is becoming buggier, it seems to me that we need to return to "less complexity is better" philosophies. Just take this crazily long thread for example: The summary of this long story is basically that Apple can't write a working email client. Email has been around since the 1960s. Moving messages between folders can literally be as simple as moving a text file from one folder to another. Why can't they make this work? My guess is that their implementation has become too complicated, so that it's not immediately obvious what is going wrong, and no one can reproduce the error to debug it within the time allotted for that task by management. Is the cost of losing some of your email, potentially losing your job, made up for by having a slick interface? I'd venture most people wouldn't think so.


Yes, two processes instead of half a dozen seems a bit more reasonable...

Old Unix Geek - dunno if you'll see this cos I'm late to the party. I'm quite a big fan of apple but I always appreciate your thoughtful comments - a lot of the time, people are so snarky about Apple I can't see anything positive to gain from reading what they say. But your stuff is always interesting.

Why do you think Apple can't write an email client now? I'd guess that you believe they simply don't have the kind of low-level "simple" thought processes necessary - is that it? Can they get it back? Did they ever have it?

I'm not a developer. I'm a subway train driver in London, so my world view of stuff is very simple - based in electro-pneumatics and relays and magnets and air supplies. So to me, solutions are always simple, like "just get someone to start from scratch".

Is such a solution ever possible for a corp like Apple? To get a team to just start again within existing systems and see what they can do?

Lol somehow I've made you the spokesman for the "what should Apple do next?" movement. Sorry!

Hi Tony, thanks for your nice comment!

I haven't reverse engineered the Mail tool and I don't work at Apple, so what I write will be somewhat informed speculation. If I did work there, they would lock my mouth shut every night before I left the campus!

However, my impression is that the problem lies with management's priorities and with modern software development practices.

In the (G)olden days of yore, the About field of Mac apps told you who wrote them: a few good programmers. This changed as the list of contributors grew. But a good program is written by people who have a clear idea of what the problem is and what the solution is. The more people, the less likely everyone actually shares a single understanding. The assumption here is that programmers are not fungible: a couple of guys know Mail inside out. Their job is to keep that tool working and up to date. They don't do anything else. And the tool is ready when it's ready... not necessarily when marketing wants it done.

In the (G)olden days of yore, replacing buggy software was expensive. You'd have to replace the computer's ROMs, or send out new floppies which cost real money to duplicate. It's like cars: recalls are expensive, so most car manufacturers test their products very carefully before shipping them.

Today, however, we have the internet. Because software is easily upgradeable over the internet, many managers now feel that testing is less important than other tasks. The result is that a lot of software is now delivered broken. A lot of software is frankly only tested by users!

So what does Apple feel to be more important than quality? Features. Every year a new release comes out with new features.

Apple has been pushing new features for some time now. The obvious ones have to do with changing styling to be "fresher". But there are also fads in programming land, many of which just end up dying on the vine: anyone remember the fuss about using Java to write apps? Garbage collection? OpenCL? To name but a few. Now we have Swift and SwiftUI. They're not necessary, and Apple could have continued to build on Objective-C. That's what they would have done, had stability been the key concern: staying with Objective-C would have been less disruptive and more reliable. Apple has also been developing multiple OSes simultaneously, which leaves their programmers little slack: MacOS, iOS, WatchOS, AppleTV OS, the OS inside the HDMI cable, etc.

Companies which emphasise features over reliability often consider programmers to be fungible: so-and-so is free, this needs to be done, let's get so-and-so to do it. Obviously that means so-and-so gets along by understanding just enough of whatever code-base (s)he's working on to get the task done, rather than taking the time to clearly understand the full scope of the problem and solution. That also means so-and-so won't know enough to test all the edge cases. And of course, different people have different strengths and weaknesses. It seems to me that this must be happening at Apple.

Each new feature adds to the product's complexity: rather than maintaining a clear understanding and solution to the problem, new stuff is bolted on. That works for a while, particularly if the original code was well architected, but over time the software gets slower and clunkier. Unless someone goes back, truly understands the problem and solution in depth, and rewrites / refactors the code to simplify it, all of which takes a lot of time, people just increase complexity to "fix" the slowness (or to get around the security implications of parser bugs). But refactoring code is often unpopular: it's tough to do well and employees are measured on features, not program stability.

Let's consider Mail. One can make it faster by increasing complexity as follows. Let's say your email is stored as text files in a folder. To show the list of current emails, you could just look through the latest files in the email folder, and extract their Subjects, Dates and Senders and print that out. But your user-interface (UI) will be a little slow when paging through all your emails. So instead you could parse each email as it arrives and put that information in a database. The database is nice and fast, and your UI is snappy, but now the same information is in two places, and if information is lost or corrupted in one place, the program gets terribly confused. Another way to make things feel snappy is to use multi-threading. Your computer has multiple CPUs, why not use one to do work that takes time, while the other one makes the UI seem nice and responsive? The problem is again the same: if the program's threads do not agree what state the program is in, all hell breaks loose. In complex programs, that happens a lot.
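To sketch the trade-off (illustrative Python; the function and field names are my choice): the simple approach reads the headers straight out of the message text every time, which is slow but has only one source of truth.

```python
from email.parser import Parser

def summarize(raw_message: str) -> dict:
    """Pull the fields a mailbox list view needs straight from the
    message text -- the slow-but-single-source-of-truth approach."""
    headers = Parser().parsestr(raw_message, headersonly=True)
    return {"subject": headers["Subject"],
            "sender": headers["From"],
            "date": headers["Date"]}

# A snappier client caches these summaries in a database instead of
# re-parsing; the price is a second copy of the same data that can
# silently drift out of sync with the files on disk.
```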

Even without adding unnecessary complexity, the problem space is actually getting more difficult. In the old days, emails were dumb ASCII text: very little could go wrong. But then we made emails "prettier" by letting them include HTML and Unicode. That meant adding an enormous piece of complexity: a web browser's rendering engine. But then other people used HTML's capabilities to track whether you looked at your email or not, so-called web tracking. So now programmers had to add complexity to defend against that. And then other people realised that characters with different Unicode encodings can render very similarly or even identically, letting them send people to fake websites whose addresses appear to be the same as genuine ones, thereby stealing their passwords. So again programmers had to add complexity to defend against that. And so on.
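The Unicode lookalike trick is easy to demonstrate. A small illustration: the mixed-script check shown here is just one of several possible defences, and the helper name is made up.

```python
import unicodedata

genuine = "apple.com"
spoofed = "\u0430pple.com"   # first letter is CYRILLIC SMALL LETTER A

def scripts(domain: str) -> set:
    """Which Unicode scripts a domain's letters come from; mixing
    scripts is a common signal of a lookalike (homoglyph) attack."""
    return {unicodedata.name(ch).split()[0]
            for ch in domain if ch.isalpha()}

# The two strings render almost identically but compare unequal, and
# the spoof mixes Cyrillic with Latin -- one practical signal to flag.
```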

So basically, I think the real job of software engineers is to tame the complexity beast: It's very difficult to find a solution that is as simple as possible but no simpler. It has become harder because the intrinsic complexity of the problems to solve has increased. But it also has become harder because Apple's management is making trade-offs that do not reduce complexity: adding unnecessary features, rewriting what doesn't need rewriting, not simplifying what does need to be simplified, probably treating programmers as fungible, maintaining too many product lines, and not testing software in depth before it is released.

Can this all be fixed? Sure, but it would require a lot of effort and a significant change of attitude. Apple would need to prioritise stability over features. But it's easier, more fun, and more attractive to users and programmers alike to see/work on new features (e.g. M1). So will it be fixed? Perhaps only when Apple's reputation has been harmed by their bugs.

Thank you so much for such a full reply; that's given me such a lot of insight (to the point where I actually don't think I can add anything!) and I am truly grateful to you for taking the time.

Thank you!
