Friday, July 23, 2021

Through the Blast Door

Nick Heer:

This weekend’s first batch of stories from the “Pegasus Project” — a collaboration between seventeen different outlets invited by French investigative publication Forbidden Stories and Amnesty International — offers a rare glimpse into the infrastructure of modern espionage. This is a spaghetti junction of narratives: device security, privatized intelligence and spycraft, appropriate targeting, corporate responsibility, and assassination. It is as tantalizing a story as it is disturbing.

“Pegasus” is a mobile spyware toolkit created and distributed by NSO Group. Once successfully installed, it reportedly has root-level access and can, therefore, exfiltrate anything of intelligence interest: messages, locations, phone records, contacts, and photos are all obvious and confirmed categories. Pegasus can also create new things of intelligence value: it can capture pictures using any of the cameras and record audio using the microphone, all without the user’s knowledge. According to a 2012 Calcalist report, NSO Group is licensed by the Israeli Ministry of Defense to export its spyware to foreign governments, but not private companies or individuals.

OCCRP:

The phones of Panyi, Thakurta, and Vaqifqizi were analyzed by Amnesty International’s Security Lab and found to be infected after their numbers appeared on a list of over 50,000 numbers that were allegedly selected for targeting by governments using NSO software. Reporters were able to identify the owners of hundreds of those numbers, and Amnesty conducted forensic analysis on as many of their phones as possible, confirming infection in dozens of cases. The reporting was backed up with interviews, documents, and other materials.

[…]

The strongest evidence that the list really does represent Pegasus targets came through forensic analysis.

Amnesty International’s Security Lab examined data from 67 phones whose numbers were in the list. Thirty-seven phones showed traces of Pegasus activity: 23 phones were successfully infected, and 14 showed signs of attempted targeting. For the remaining 30 phones, the tests were inconclusive, in several cases because the phones had been replaced.

John Scott-Railton:

We @citizenlab conducted peer review.

Here’s an explainer THREAD.

Daniel Cuthbert:

NSO Group has a full zero-click zero-day iMessage exploit chain that can install the Pegasus spyware on the latest version of iOS at the time of writing (14.6).

Craig Timberg, Reed Albergotti, and Elodie Guéguen:

Pegasus can collect emails, call records, social media posts, user passwords, contact lists, pictures, videos, sound recordings and browsing histories, according to security researchers and NSO marketing materials. The spyware can activate cameras or microphones to capture fresh images and recordings. It can listen to calls and voice mails. It can collect location logs of where a user has been and also determine where that user is now, along with data indicating whether the person is stationary or, if moving, in which direction.

And all of this can happen without a user even touching her phone or knowing she has received a mysterious message from an unfamiliar person — in Mangin’s case, a Gmail user going by the name “linakeller2203.”

Ivan Krstić:

For over a decade, Apple has led the industry in security innovation and, as a result, security researchers agree iPhone is the safest, most secure consumer mobile device on the market. […] Attacks like the ones described are highly sophisticated, cost millions of dollars to develop, often have a short shelf life, and are used to target specific individuals. While that means they are not a threat to the overwhelming majority of our users, we continue to work tirelessly to defend all our customers, and we are constantly adding new protections for their devices and data.

Timberg et al.:

The investigation found that iMessage — the built-in messaging app that allows seamless chatting among iPhone users — played a role in 13 of the 23 successful infiltrations of iPhones.

[…]

In a 2,800-word email responding to questions from The Post that Apple said could not be quoted directly, the company said that iPhones severely restrict the code that an iMessage can run on a device and that it has protections against malware arriving in this way. It said BlastDoor examines Web previews and photos for suspicious content before users can view them but did not elaborate on that process.

It’s not clear to me how this was done. Is there a flaw in the BlastDoor sandbox? Or is Messages not actually using it for all decoding of untrusted data, e.g. images?

Reed Albergotti:

Apple has so many bugs that it can’t fix them all, and can take years to implement fixes. It created a bug bounty program in 2016, which it says pays the most in the industry. But inside and outside the company, the view is that it has room for improvement. A lot of room.

One former employee told me the security team would send canned responses (to ensure they would not be vetoed by the marketing team) to researchers who submitted bugs. That kind of communication does not lead to good relationships with security researchers.

[…]

Apple is famously shy about sharing anything, especially acknowledging problems, and that is true when it comes to security. Apple argues that it’s better that way. The less hackers know, the better. That is why Apple makes it difficult to even locate traces of malware on iPhones.

As @craiu told me, that means we don’t know the extent of the problem. He said if Apple allowed more analysis of iPhones for malware, it would generate bad press, but make iPhones more secure.

Stefan Esser:

With PEGASUS in the news again, never forget that behind closed doors people will tell you that when PEGASUS was first found in the wild, Apple forbade researchers from putting the samples in public, and they complied because they were scared for their app(s) in the @AppStore.

Whenever Apple claims the @AppStore is required for security, keep in mind those “secret” stories where Apple managers threatened security companies to shut up because otherwise their apps in the @AppStore might get extra review…

Stefan Esser:

Interesting in this PEGASUS research is also that we have been right: making persistence hard does not stop phone hacks; instead it makes them even harder to find, because they leave few to no artifacts on disk. Without introspection of the computers in our pockets we are doomed :P

Nikias Bassen:

This is the problem with Apple (and Google) locking out their users. It actually helps the bad actors since the user cannot see what is happening on the device, and after the fact you can’t even get a sample of the malware without a jailbreak.

Stefan Esser:

iOS attacks have been ongoing for years. They were invisible because Apple denies introspection of iPhones. This is part of their marketing claim that iPhones are invulnerable compared to the competition. Then iOS exploitation capabilities slipped into the hands of NSO, who are apparently notorious for getting caught. So finally the world learned that this is real, but only because one of the many players was caught in the act. Since they were first caught, the only other player that has been found was the campaign Google found. No other iOS drive-by attacks or malware have ever been found. And, no, this is not because they don’t exist; it is because nobody can see them. Much to the joy of Apple management.

Dan Moren:

TechCrunch’s Zack Whittaker linked to a tool that can help you check if your phone was compromised.

I downloaded and tried out the Mobile Verification Toolkit so you don’t have to and, well, it’s definitely not user friendly. I had to install some command line updates via Homebrew, which took a little bit of trial and error after the instructions proved not to be exactly correct for my system, then had to make a decrypted copy of my iPhone backup, plus had to make sure I’d downloaded the correct definitions file to compare it to.
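For the curious, the rough flow looks something like the sketch below. This assumes MVT’s 2021-era command-line interface and Amnesty’s published Pegasus indicators file (`pegasus.stix2`); flag names, paths, and the backup location are illustrative and may differ on your system.

```shell
# Install the Mobile Verification Toolkit (MVT)
pip3 install mvt

# Decrypt an encrypted local iPhone backup. (Backups must be
# encrypted, or the most useful artifacts are never written.)
# <UDID> is the device-specific backup folder name.
mvt-ios decrypt-backup \
    -p "backup-password" \
    -d ~/decrypted-backup \
    ~/Library/Application\ Support/MobileSync/Backup/<UDID>

# Compare the decrypted backup against Amnesty’s published
# indicators of compromise from the AmnestyTech/investigations repo
mvt-ios check-backup \
    --iocs pegasus.stix2 \
    --output ~/mvt-results \
    ~/decrypted-backup
```

Any matches against the indicators end up as JSON files in the output directory, which you then have to interpret yourself — which is roughly Moren’s point about it not being user friendly.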

How likely is it that the evidence would be included in a backup?

Simone Manganelli:

Huh?

Israeli spyware company NSO Group has said repeatedly that its surveillance tools do not work against smartphones based in the United States.

Why would that matter for 0-click iMessage vulnerabilities?

Matthew Green:

Many attacks used “network injection” to redirect the victim to a malicious website. That technique requires some control of the local network, which makes it hard to deploy to remote users in other countries. A more worrying set of attacks appear to use Apple’s iMessage to perform “0-click” exploitation of iOS devices. Using this vector, NSO simply “throws” a targeted exploit payload at some Apple ID such as your phone number, and then sits back and waits for your zombie phone to contact its infrastructure.

[…]

Adding a firewall is the cheap solution to the problem, and this is probably why Apple chose this as their first line of defense. But actually closing this security hole is going to require a lot more. Apple will have to re-write most of the iMessage codebase in some memory-safe language, along with many system libraries that handle data parsing.

[…]

NSO can afford to maintain a 50,000 number target list because the exploits they use hit a particular “sweet spot” where the risk of losing an exploit chain — combined with the cost of developing new ones — is low enough that they can deploy them at scale. That’s why they’re willing to hand out exploitation to every idiot dictator — because right now they think they can keep the business going even if Amnesty International or CitizenLab occasionally catches them targeting some human rights lawyer.

Update (2021-07-26): Nick Heer:

The reporting associated with the Pegasus Project has been enlightening so far, but not without its faults. The confusion about this list of phone numbers is one of those problems — and it is a big one. It undermines some otherwise excellent stories because it is not yet known why someone’s phone number would end up on this list. Clearly it is not random, but nor is it a list of individuals whose phones were all infected with Pegasus spyware.

See also: Wired, MacRumors, TidBITS.

Update (2021-07-30): John Gruber:

[Last] year Motherboard reporter Joseph Cox revealed that Facebook attempted to purchase the right to use Pegasus to spy on their own iOS users.

Update (2021-08-13): Spencer Dailey:

Apple’s customers deserve a high level of transparency from Apple on the whole NSO/Pegasus affair. For years, NSO (and other exploit vendors) have facilitated hacking iPhones, causing incalculable damage to individuals (many of whom are in jail or worse). Apple should finally show us they have gotten real about stopping this… starting first with a press conference, then paying 20x the amount for 0-day exploits (to break NSO’s business model), and then reallocating its engineering talent to focus more on squashing bugs in its multimedia parsing libraries. I don’t know – something beyond saying it’s “not a threat to the overwhelming majority of our users”.

1 Comment:

I'd presume that it's not only NSO that has those capabilities. We just know a lot about them because they operate in the open.
