Friday, October 15, 2021

The Risks of Client-Side Scanning

Ben Lovejoy:

The British government has expressed support for Apple’s now-delayed CSAM scanning plans, and says that it wants the ability to scan encrypted messages for CSAM, even where end-to-end encryption is used.

Tim Hardwick:

More than a dozen prominent cybersecurity experts hit out at Apple on Thursday for relying on “dangerous technology” in its controversial plan to detect child sexual abuse images on iPhones (via The New York Times).

The damning criticism came in a new 46-page study by researchers that looked at plans by Apple and the European Union to monitor people’s phones for illicit material, and called the efforts ineffective and dangerous strategies that would embolden government surveillance.

Hal Abelson et al. (PDF):

Some in industry and government now advocate a new technology to access targeted data: client-side scanning (CSS). Instead of weakening encryption or providing law enforcement with backdoor keys to decrypt communications, CSS would enable on-device analysis of data in the clear. If targeted information were detected, its existence and, potentially, its source, would be revealed to the agencies; otherwise, little or no information would leave the client device. Its proponents claim that CSS is a solution to the encryption versus public safety debate: it offers privacy -- in the sense of unimpeded end-to-end encryption -- and the ability to successfully investigate serious crime. In this report, we argue that CSS neither guarantees efficacious crime prevention nor prevents surveillance. Indeed, the effect is the opposite. CSS by its nature creates serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic. There are multiple ways in which client-side scanning can fail, can be evaded, and can be abused.
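
To make the mechanism concrete, here is a minimal sketch of the flow the report describes: hash content on the device, compare it against a blocklist, and report only on a match. This is purely illustrative and assumes a toy average-hash with a plain set lookup; it is not Apple's NeuralHash, private set intersection, or threshold scheme.

```python
# Toy sketch of the client-side scanning (CSS) flow described above.
# Assumptions: a trivial "average hash" over an 8x8 grayscale grid and an
# exact set lookup stand in for a real perceptual hash and matching protocol.

from typing import Iterable, Set


def average_hash(pixels: Iterable[Iterable[int]]) -> int:
    """Toy perceptual hash: one bit per pixel, set if brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def scan_before_upload(pixels, blocklist: Set[int]) -> dict:
    """On-device check: only a match produces a report; otherwise nothing leaves the client."""
    h = average_hash(pixels)
    if h in blocklist:
        return {"match": True, "hash": h}  # would be surfaced to the provider/agency
    return {"match": False}                # no information leaves the device


# Hypothetical usage: a synthetic 8x8 grayscale grid whose hash is on the blocklist.
image = [[r * 8 + c for c in range(8)] for r in range(8)]
blocklist = {average_hash(image)}
print(scan_before_upload(image, blocklist))  # {'match': True, 'hash': ...}
```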

Bruce Schneier:

It’s not a cryptographic backdoor, but it’s still a backdoor — and brings with it all the insecurities of a backdoor.

[…]

We had been working on the paper well before Apple’s announcement. And while we do talk about Apple’s system, our focus is really on the idea in general.

Ross Anderson:

We did not set out to praise Apple’s proposal, but we ended up concluding that it was probably about the best that could be done. Even so, it did not come close to providing a system that a rational person might consider trustworthy.

Even if the engineering on the phone were perfect, a scanner brings within the user’s trust perimeter all those involved in targeting it – in deciding which photos go on the naughty list, or how to train any machine-learning models that riffle through your texts or watch your videos. Even if it starts out trained on images of child abuse that all agree are illegal, it’s easy for both insiders and outsiders to manipulate images to create both false negatives and false positives. The more we look at the detail, the less attractive such a system becomes. The measures required to limit the obvious abuses so constrain the design space that you end up with something that could not be very effective as a policing tool; and if the European institutions were to mandate its use – and there have already been some legislative skirmishes – they would open up their citizens to quite a range of avoidable harms.
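
As a toy illustration of the false-negative problem Anderson describes (again using a throwaway average-hash, not any production perceptual hash), nudging a single pixel near the brightness threshold changes the hash, so an exact blocklist lookup no longer matches an image that looks identical to a human. Real perceptual hashes are more robust than this, but published research has demonstrated both evasion and forced-collision attacks against them.

```python
# Minimal evasion sketch under the same toy average-hash assumption as above.
def average_hash(grid):
    flat = [p for row in grid for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

original = [[r * 8 + c for c in range(8)] for r in range(8)]
blocklist = {average_hash(original)}

evasive = [row[:] for row in original]
evasive[4][0] -= 1  # pixel value 32 -> 31: drops just below the brightness mean (31.5)

print(average_hash(original) in blocklist)  # True
print(average_hash(evasive) in blocklist)   # False: a visually identical image evades the match
```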

