Friday, April 10, 2026

Privacy & Security Settings Don’t Show Intent-Based Access

Howard Oakley (Hacker News):

Thus, access to a protected folder by user intent, such as through the Open and Save Panel, changes the sandboxing applied to the caller by removing its constraint to that specific protected folder. As the sandboxing isn’t controlled by or reflected in Privacy & Security settings, that allows TCC, in Files & Folders, to continue showing access restrictions that aren’t applied because the sandbox isn’t applied.

[…]

It’s possible for an app to have unrestricted access to one or more protected folders while its listing in Files & Folders shows it being blocked from access, or for it to have no entry at all in that list.

[…]

Most concerning is the apparent permanence of the access granted, requiring an arcane command in Terminal and a restart in order to reset the app’s privacy settings.

I was aware that access could be granted in this way, but I think I assumed that it only lasted until the app quit. Oakley says that it actually persists until you run tccutil reset All and restart. (I guess the specific TCC identifier is undocumented; clearly it’s not SystemPolicyDocumentsFolder.)
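The blunt reset Oakley describes can be sketched as a guarded shell snippet. The bundle ID in the scoped variant is a hypothetical example, and tccutil ships only with macOS, so the sketch checks for it first:

```shell
# Reset TCC grants for every service and every app, as Oakley describes;
# a restart is still required afterwards for the change to take effect.
# tccutil ships only with macOS, so guard the call in this sketch.
if command -v tccutil >/dev/null 2>&1; then
    tccutil reset All                      # all services, all apps
    # tccutil reset All com.example.MyApp # or limit the reset to one app
else
    echo "tccutil is only available on macOS"
fi
```

Note that the scoped form still resets every service for that app; per Oakley, resetting a single service does not clear the intent-based grant.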

I generally have the opposite problem, with access not lasting as long as expected.

Notifications Privacy

Joseph Cox:

The FBI was able to forensically extract copies of incoming Signal messages from a defendant’s iPhone, even after the app was deleted, because copies of the content were saved in the device’s push notification database, multiple people present for FBI testimony in a recent trial told 404 Media.

Rosyna Keller:

Push Notifications can be sent encrypted (server controls the encryption) and decrypted locally with a UNNotificationServiceExtension running on the device. Signal and other E2EE apps do this.

But then the decrypted notification gets saved to the database.

Rosyna Keller:

So iOS should probably delete an app’s entries from the notifications database when said app is deleted…

More than that, you may not want certain notifications to even be posted. As I discussed back in 2015, the Notification Center settings only control what’s displayed; turning notifications off there does not prevent the notifications from being generated and stored in the database. These days, the database is protected by TCC, but the information is still written to disk. For more privacy, apps should have their own settings that prevent the information from being sent to the system in the first place.

Marcus Mendes (Hacker News):

Signal’s settings include an option that prevents the actual message content from being previewed in notifications. However, it appears the defendant did not have that setting enabled, which, in turn, seemingly allowed the system to store the content in the database.

Patrick Wardle:

AuRevoir (French for ‘goodbye’) is a simple utility to view and remove notifications from Apple’s Notification Database.
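For the curious, the database AuRevoir reads can also be inspected directly with sqlite3. This is a minimal sketch assuming the db2 path used by recent macOS versions and a record table; the exact location and schema vary by OS release:

```shell
# Peek at the per-user notification database that AuRevoir reads.
# On recent macOS versions it lives under the Darwin user directory;
# the 'record' table name is an assumption and varies by release.
DB="$(getconf DARWIN_USER_DIR 2>/dev/null)com.apple.notificationcenter/db2/db"
if [ -f "$DB" ]; then
    sqlite3 "$DB" 'SELECT COUNT(*) FROM record;'  # number of stored notifications
else
    echo "notification database not found (macOS only)"
fi
```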

Mythos and Glasswing

Rich Mogull:

Anthropic, the company behind the Claude AI chatbot, made two security announcements that were shocking for many but seen as inevitable by those of us working in AI security. First, it announced Mythos Preview, a new, non-public AI model that turns out to be startlingly good at finding security flaws in software. The second was Project Glasswing, Anthropic’s program for getting that capability into the hands of the companies best positioned to fix those flaws before anyone else can exploit them. Apple is one of those companies.

As much as I’d like to downplay the announcements, Mythos and Project Glasswing are very big deals on their own, and harbingers for the future of digital security. Mythos was able to find and exploit new vulnerabilities in every major operating system, including a bug in OpenBSD, an operating system famous for its security, that had been sitting there unnoticed for 27 years.

[…]

We are at the start of a period in which finding software flaws that affect everyday users will become dramatically easier for both attackers and defenders. […] However, over the long run, I believe using AI to identify security vulnerabilities favors defenders, because developers can find and fix many more bugs before shipping software to the public.

Anthropic has a habit of making wild and scary public statements that seem designed to generate headlines and funding but sort of fall apart upon scrutiny. I initially dismissed this as more of the same, but people seem to be taking it seriously.

Paul Haddad:

Our model is so good, it’s not safe to release, yet. Has to be one of the greatest AI marketing stunts ever.

Ben Thompson:

There’s reason for cynicism, given Anthropic’s history, but the part of the “Boy Cries Wolf” myth everyone forgets is that the wolf did come in the end.

Daniel Jalkut:

If Anthropic has really developed an LLM that can suss out security weaknesses better than any other AI, the US government would be foolish to continue shunning them.

Or, rather, if the government believes the marketing, it may want to take control of the company and its technology, like how it restricted civilian nuclear research.

Ben Thompson:

In fact, Amodei already answered the question: if nuclear weapons were developed by a private company, and that private company sought to dictate terms to the U.S. military, the U.S. would absolutely be incentivized to destroy that company.
