Wednesday, October 8, 2025

Lessons From San Bernardino and ICEBlock

Wiley Hodges:

I used to believe that Apple were unequivocally ‘the good guys.’ I passionately advocated for people to understand Apple as being on the side of its users above all else. I now feel like I must question that.

[…]

The event that represented a turning point for that skeptical view of Apple was the stand against the FBI over the San Bernardino case. You took a risky stand that was in keeping with the principles you had articulated for the company. The result was bigger than the one case: that act of lawful, principled defiance of government intimidation and jawboning helped to convince people that Apple’s actions and stated ideals were in alignment; that the company was walking the walk as well as talking the talk.

[…]

Acceding to a government ‘demand’ without demanding that the government follow legal process in order to back up its request (or at least shedding light on how the government did follow such process) raises the question of how easily Apple will accede to other requests.

[…]

Will Apple give data on the identities of users who downloaded the ICEBlock app to the government? Will Apple block podcasts that advocate points of view opposed to the current US administration? I imagine and hope that these are ridiculous questions, but without a clearer demonstration of Apple’s principled commitment to lawful action and due process, I feel uncertain.

Via John Gruber:

But, exactly as many critics of the App-Store-as-exclusive-distribution-point-for-native-software model have long warned, it’s proven to be a choke point that Apple was unwilling to defend.

I don’t think the problem is really Tim Cook or whoever at Apple made the ICEBlock decision last week. The current situation is just the symptom of a decision made long ago: for Apple to be a choke point for app distribution. If your solution to government overreach is to depend on the right person being in charge, who will say no, you’ve already lost.

Apple understands this with customer privacy. If you don’t want to have to give up user data, you design the system to store as little of it as possible, and you try to store everything else in such a way that you can’t actually access it. There have been flaws in the execution, but Apple has clearly articulated this principle and worked towards it. What you don’t do is upload the user’s most private data to iCloud, encrypt it with a password that only Tim Cook knows, and hope that he’ll never access it, because you trust him. Maybe he wouldn’t, but he won’t be there forever, and ultimately there’s not much Apple can do if it gets a legally valid request for something it can easily provide.
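The principle is easy to state in code. Here is a minimal sketch, assuming a hypothetical PrivateVault type (this is not Apple's actual Advanced Data Protection implementation): if the key is generated and kept on-device, the blob you upload is useless to the server, no matter who is in charge of it.

```swift
import CryptoKit
import Foundation

// A minimal sketch of client-side encryption: the key never leaves
// the device, so the server stores only ciphertext it cannot read.
struct PrivateVault {
    // In a real app this key would live in the Keychain or be
    // protected by the Secure Enclave; generating it inline keeps
    // the sketch short.
    private let key = SymmetricKey(size: .bits256)

    // Encrypt before upload; the sealed blob is all the server sees.
    func sealForUpload(_ plaintext: Data) throws -> Data {
        let box = try AES.GCM.seal(plaintext, using: key)
        return box.combined!  // non-nil with the default 12-byte nonce
    }

    // Decrypt after download; only a device holding the key can do this.
    func openAfterDownload(_ blob: Data) throws -> Data {
        let box = try AES.GCM.SealedBox(combined: blob)
        return try AES.GCM.open(box, using: key)
    }
}

let vault = PrivateVault()
let blob = try vault.sealForUpload(Data("private data".utf8))
let restored = try vault.openAfterDownload(blob)  // round-trips on-device
```

There is no essay about when the server operator may peek, because there is nothing for them to peek at.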

Yet that’s what Apple’s done with app distribution. They designed a system with a kill switch, and now people are surprised and upset that they used it. The problem is not that they pressed the button this one time when you didn’t want them to. The problem is that there is a button and Apple likes having it. They value it more than your right to use your own device as you see fit. They justify it by saying that the button is there for your protection.

Hodges is asking Tim Cook and his team to “more clearly explain the basis on which” they pressed the button, but I don’t think that’s the right question at all. If we were talking about privacy, would you be satisfied with a secure golden key accompanied by an essay about when it would be OK to use it? Would you even take such a proposal seriously?

The lesson of San Bernardino is not really that Tim Cook said “no.” It’s that he could say “no” because asking Apple to exploit an iPhone/iOS backdoor (build an “entirely new operating system,” as Apple put it) was different from asking Apple for data that it already had. (The FBI asked for that, too, and Apple provided it, as I believe it should have.) But Apple realized that the backdoor made the system insecure and removed it in subsequent iPhones. Now, at least in theory, no one has to rely on Cook saying “no” because he can’t say “yes.” Obviously, the analogy with app distribution is that the only way to prevent the kill switch from being used is to remove it.
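To put rough numbers on why the hardware change matters, here is a back-of-the-envelope sketch. Apple's platform security documentation has cited a passcode key-derivation time of roughly 80 ms per attempt on-device; that figure is illustrative here, and the Secure Enclave's escalating lockout delays (ignored below) slow real attacks much further.

```swift
import Foundation

// Each passcode guess requires on-device key derivation tuned to
// take about 80 ms, even if the software retry limits the FBI
// wanted disabled were gone.
let secondsPerGuess = 0.080

func worstCaseHours(forDigits digits: Int) -> Double {
    let combinations = pow(10.0, Double(digits))  // all numeric passcodes
    return combinations * secondsPerGuess / 3600
}

print(worstCaseHours(forDigits: 4))  // ≈ 0.2 hours, about 13 minutes
print(worstCaseHours(forDigits: 6))  // ≈ 22 hours, before any lockout delays
```

A long alphanumeric passcode pushes the worst case into years, which is the sense in which “no” stopped being a policy and became a property of the system.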


5 Comments


There's a lot of fear-mongering about the risks, however academic and unproven, of side-loading. So far we have yet to see anything substantial materialize. “Don't like it? Go to Android” is the most common refrain. There's also a compelling if uninformed belief that Apple has somehow cultivated some mythic and unique lion's share of safety with its walled-garden approach. That term alone is used by the Apple faithful not in any pejorative sense, but with a locked-eyes, dead-serious, stare-you-in-the-face fervor. Meanwhile, that faith is misplaced, as can plainly be seen in the myriad scam apps that are slowly (if ever) removed because they net the company serious cash. The chickens have come home to roost, but the cognitive dissonance is too much to bear for many fans. I'm sure the silence in those circles will be largely deafening. For everyone else watching the long game play out, you have to wonder what tipping point, if any, would ever compel people toward a mea culpa. Or has the Internet so thoroughly insulated people from constructive criticism and opposing points of view that most now occupy an echo chamber serving as an unintentional, propagandistic indoctrination arm?


Ironically, the DMA, with the mandated availability of third-party app stores, didn't address the kill switch issue.
Because of notarization, Apple can still wipe any app, even those distributed outside the official App Store.
To my knowledge, the only way for a developer to be immune to that kill switch is by offering a web app.
Funnily enough, Steve Jobs initially thought web apps would be the main way third-party "apps" were distributed on the first iPhone.
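To make that kill switch concrete: the exact mechanism isn't publicly documented, but architecturally, notarization amounts to a launch gate keyed to a centrally issued, revocable ticket. The sketch below is purely hypothetical (NotarizationTicket and LaunchGate are invented names, not Apple APIs); it only shows the shape of the mechanism and why a web app sits outside it.

```swift
import Foundation

// Hypothetical sketch, not real Apple API: a central authority
// signs a revocable ticket, and the OS consults it before
// launching any native app.
struct NotarizationTicket {
    let appID: String
    let revoked: Bool  // flipped server-side, at the issuer's discretion
}

enum LaunchGate {
    static func mayLaunch(_ ticket: NotarizationTicket?) -> Bool {
        guard let ticket else { return false }  // no ticket, no launch
        return !ticket.revoked                  // revocation is the kill switch
    }
}

// A web app runs in the browser and never presents a ticket to this
// gate, which is why it is immune to revocation.
```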


The real tragedy and the reason the status quo is so hard to break is that both sides of the debate are right to some extent: while having a master kill switch is wholly undesirable for all the excellent reasons that MJTsai and others have pointed out, countless users will install anything on their devices.

The (very) relative security of smartphones has allowed them to become indispensable devices. An entire economy relies on the fact that it is not utterly suicidal for the average, non-technical user to put random games, dating apps, ID cards, and banking apps onto the same device. The worlds of commerce and finance expect every person on this planet to possess a device that is both full of junk and reasonably secure, even if it is never consciously secured or updated.

Can this be achieved with extreme sandboxing? Sure! But then comes the EU with demands for extreme interoperability — which, again, are not ideologically wrong at all, but ignore all of the above. Plus, extreme sandboxing is extremely limiting, as early versions of iOS and iPadOS showed: there is only so much one can do (or sell) on a device with no accessible filesystem, no inter-app communication, etc.

Since we cannot have watertight sandboxing, both for legal and technical reasons, and since we cannot underestimate the crazy things that will happen across billions and billions of smartphones, I understand why Apple and Google are reluctant to let go of their grip. A kill switch is a last resort, but last resorts are good to have at this scale. (Remember when Apple remotely uninstalled parts of the Zoom client from every Mac? That was both a terrifying capability and a very welcome development.)

Remember, we on the Internet write about replacing our phones every year, even if we do not necessarily do so. A large number of commentators run iOS betas as a matter of course. In the “real world,” people go into stores every day to buy fresh, unopened devices that run already obsolete versions of their operating systems and will never be updated. This is why companies like Meta support positively ancient versions of Android and iOS: “real people” do use them for years after the tech world has moved on. And let's not get started on “environmentally responsible” second- or third-hand devices that have been “refreshed” in shady shops. Good for Mother Earth they certainly are, but secure? Hell no.

Yes, things more or less work on computers, and user education would allow a great deal more flexibility, but the whole point of the smartphone is that it started life as an “appliance” that the Disneys, Banks of America, and DMVs of this world could trust by default to faithfully spy on you and protect their intellectual property. Turning smartphones into general-purpose computing devices would require a fundamental shift in how users and corporations alike think about them.

My hope is that Apple and Google will, at some future point, realise they have backed themselves into a legal corner. With the US, the EU, China, Russia, India, and other large markets each demanding conflicting and contradictory capabilities, it will, at some point, become easier and cheaper for them to just wash their hands of the whole problem, like they do with computers. Maintaining a kill switch may well, with a bit of luck, become economically and politically unfeasible, but this won’t happen overnight. It will take a lot — a lot! — of wasted development time and a lot — a lot! — of random, politically motivated fines for the App Store or the Play Store to stop making economic sense.


A lot of the talk about installing apps being too dangerous makes it sound like we are all running Windows XP unpatched.


Apple are the East India Company of our time.
