Archive for December 28, 2023

Thursday, December 28, 2023

Operation Triangulation Details

Dan Goodin (Hacker News):

Researchers on Wednesday presented intriguing new findings surrounding an attack that over four years backdoored dozens, if not thousands, of iPhones, many of which belonged to employees of Moscow-based security firm Kaspersky. Chief among the discoveries: the unknown attackers were able to achieve an unprecedented level of access by exploiting a vulnerability in an undocumented hardware feature that few, if any, outside of Apple and chip suppliers such as ARM Holdings knew of.


The mass backdooring campaign, which according to Russian government officials also infected the iPhones of thousands of people working inside diplomatic missions and embassies in Russia, came to light in June. Over a span of at least four years, Kaspersky said, the infections were delivered in iMessage texts that installed malware through a complex exploit chain without requiring the receiver to take any action.


With that, the devices were infected with full-featured spyware that, among other things, transmitted microphone recordings, photos, geolocation, and other sensitive data to attacker-controlled servers. Although infections didn’t survive a reboot, the unknown attackers kept their campaign alive simply by sending devices a new malicious iMessage text shortly after devices were restarted.

Boris Larin (video, Hacker News):

This presentation was also the first time we had publicly disclosed the details of all exploits and vulnerabilities that were used in the attack. We discover and analyze new exploits and attacks using these on a daily basis, and we have discovered and reported more than thirty in-the-wild zero-days in Adobe, Apple, Google, and Microsoft products, but this is definitely the most sophisticated attack chain we have ever seen.


Various peripheral devices available in the SoC may provide special hardware registers that can be used by the CPU to operate these devices. For this to work, these hardware registers are mapped to the memory accessible by the CPU and are known as “memory-mapped I/O (MMIO)”.
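The idea behind MMIO can be sketched with a toy model: a peripheral's registers live at physical addresses, so the CPU "operates" the device with ordinary loads and stores. The region base, size, and register offsets below are made up for illustration; real Apple SoC layouts are defined in the device tree.

```python
# Toy model of memory-mapped I/O: a peripheral's registers are exposed at
# physical addresses, so plain reads/writes to those addresses drive the device.
# The base address, size, and offsets here are illustrative, not Apple's.

class MMIORegion:
    def __init__(self, base, size):
        self.base, self.size = base, size
        self.regs = {}  # offset -> 64-bit register value

    def contains(self, addr):
        return self.base <= addr < self.base + self.size

    def read(self, addr):
        return self.regs.get(addr - self.base, 0)

    def write(self, addr, value):
        # Mask to 64 bits, as a fixed-width hardware register would.
        self.regs[addr - self.base] = value & 0xFFFFFFFFFFFFFFFF

# A made-up GPIO block: storing to an address inside the region sets a register.
gpio = MMIORegion(0x20E000000, 0x1000)
gpio.write(0x20E000008, 0x1)
assert gpio.read(0x20E000008) == 1
```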


I discovered that most of the MMIOs used by the attackers to bypass the hardware-based kernel memory protection do not belong to any MMIO ranges defined in the device tree. The exploit targets Apple A12–A16 Bionic SoCs, targeting unknown MMIO blocks of registers that are located at the following addresses: 0x206040000, 0x206140000, and 0x206150000.
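The check Larin describes amounts to comparing the exploit's register addresses against the MMIO ranges the device tree declares. A minimal sketch, using the three addresses from the writeup but placeholder device-tree ranges (the real ranges are device-specific):

```python
# Sketch of the analysis step: flag MMIO addresses that fall outside every
# range declared in the device tree. DEVICE_TREE_RANGES is a placeholder;
# the exploit addresses are the ones reported by Kaspersky.

DEVICE_TREE_RANGES = [  # (base, size) pairs -- illustrative only
    (0x200000000, 0x100000),
    (0x210000000, 0x40000),
]

EXPLOIT_ADDRS = [0x206040000, 0x206140000, 0x206150000]

def in_known_range(addr):
    return any(base <= addr < base + size for base, size in DEVICE_TREE_RANGES)

unknown = [hex(a) for a in EXPLOIT_ADDRS if not in_known_range(a)]
# With these placeholder ranges, all three addresses come back as unknown.
```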


This is no ordinary vulnerability, and we have many unanswered questions. We do not know how the attackers learned to use this unknown hardware feature or what its original purpose was. Neither do we know if it was developed by Apple or if it’s a third-party component like ARM CoreSight.

Bill Toulas:

The four flaws that constitute the highly sophisticated exploit chain and which worked on all iOS versions up to iOS 16.2 are:

  • CVE-2023-41990: A vulnerability in the ADJUST TrueType font instruction allowing remote code execution through a malicious iMessage attachment.
  • CVE-2023-32434: An integer overflow issue in XNU's memory mapping syscalls, granting attackers extensive read/write access to the device's physical memory.
  • CVE-2023-32435: Used in the Safari exploit to execute shellcode as part of the multi-stage attack.
  • CVE-2023-38606: A vulnerability using hardware MMIO registers to bypass the Page Protection Layer (PPL), overriding hardware-based security protections.
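The integer-overflow class behind CVE-2023-32434 can be illustrated generically (this is not the actual XNU code): a size computed as count × stride wraps around a fixed-width integer, so a bounds check passes for what is really an enormous mapping. Python integers don't wrap, so the sketch masks to 64 bits to mimic a C `uint64_t`:

```python
# Generic illustration of an integer overflow in a memory-mapping size check,
# the vulnerability class behind CVE-2023-32434 (not the actual XNU code).

MASK64 = (1 << 64) - 1  # emulate C uint64_t wraparound

def checked_map_size(count, stride, limit):
    size = (count * stride) & MASK64  # wraps instead of growing unbounded
    if size > limit:
        raise ValueError("mapping too large")
    return size

# 2**60 elements of 16 bytes each is 2**64 bytes, which wraps to 0 --
# so the "too large" check passes even though the request is absurd.
assert checked_map_size(1 << 60, 16, limit=1 << 32) == 0
```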

Nick Heer:

As you might recall, Russian intelligence officials claimed Apple assisted the NSA in building this malware — something which Apple has denied and for which, it should be noted, no proof has been provided of either Apple’s involvement or the NSA’s. It does not appear there is any new evidence which would implicate Apple. But it is notable that the attack relied on an Apple-specific TrueType instruction and bypassed previously undisclosed hardware memory protections. To be clear, neither of those things increases the likelihood of Apple’s alleged involvement in my mind. It does show how disused or seemingly irrelevant functions remain vulnerable and can be used by sophisticated and likely state-affiliated attackers.


Update (2024-01-05): See also: Bruce Schneier.

The New York Times Sues OpenAI

Emma Roth (Hacker News):

The New York Times is suing OpenAI and Microsoft for copyright infringement, claiming the two companies built their AI models by “copying and using millions” of the publication’s articles and now “directly compete” with its content as a result.

As outlined in the lawsuit, the Times alleges OpenAI and Microsoft’s large language models (LLMs), which power ChatGPT and Copilot, “can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style.” This “undermine[s] and damage[s]” the Times’ relationship with readers, the outlet alleges, while also depriving it of “subscription, licensing, advertising, and affiliate revenue.”

John Timmer (Hacker News):

The Times is targeting various companies under the OpenAI umbrella, as well as Microsoft, an OpenAI partner that both uses it to power its Copilot service and helped provide the infrastructure for training the GPT Large Language Model. But the suit goes well beyond the use of copyrighted material in training, alleging that OpenAI-powered software will happily circumvent the Times’ paywall and ascribe hallucinated misinformation to the Times.


Part of the unauthorized use The Times alleges came during the training of various versions of GPT. Prior to GPT-3.5, information about the training dataset was made public. One of the sources used is a large collection of online material called “Common Crawl,” which the suit alleges contains information from 16 million unique records from sites published by The Times. That places the Times as the third most referenced source, behind Wikipedia and a database of US patents.

OpenAI no longer discloses as many details of the data used for training recent GPT versions, but all indications are that full-text NY Times articles are still part of that process. (Much more on that in a moment.) Expect access to training information to be a major issue during discovery if this case moves forward.

Benjamin Mullin and Tripp Mickle:

Apple has opened negotiations in recent weeks with major news and publishing organizations, seeking permission to use their material in the company’s development of generative artificial intelligence systems, according to four people familiar with the discussions.

The technology giant has floated multiyear deals worth at least $50 million to license the archives of news articles, said the people with knowledge of talks, who spoke on the condition of anonymity to discuss sensitive negotiations. The news organizations contacted by Apple include Condé Nast, publisher of Vogue and The New Yorker; NBC News; and IAC, which owns People, The Daily Beast and Better Homes and Gardens.


Update (2023-12-29): Jason Kint:

The complaint is a must-read imho, it’s the only way to understand the alleged violations and the extent to which the systems have been designed and tuned in order to generate certain output.


So back to Exhibit J. Unlike the other 220k+ pages of exhibits documenting registered works, this exhibit contains 100 examples of alleged copyright violations with nearly identical content being outputted by ChatGPT. Again, it’s impossible to argue with this.

Here are four examples. Again, the lawsuit includes one hundred of them. You get the point. I find this exhibit to be an incredibly powerful illustration for a lawsuit that will go before a jury of Americans.

Update (2024-01-05): Gary Marcus (via Hacker News):

The crux of the Times lawsuit is that OpenAI’s chatbots are fully capable of reproducing text nearly verbatim[…]

The thing is, it is not just text. OpenAI’s image software (which we accessed through Bing) is perfectly capable of verbatim and near-verbatim repetition of sources as well.

Daniel Jeffries (via Hacker News):

The NY Times is asking that ALL LLMs trained on Times data be destroyed.

That includes GPT 3 and 4, Claude, Mistral, Llama/Llama 2 and pretty much any other model in existence.

Update (2024-01-09): Kate Downing (via Hacker News):

The complaint paints a picture of an honorable industry repeatedly pants-ed by the tech industry, which historically has only come to heel under enormous public pressure and the Herculean efforts of The Times to continue to survive. It’s interesting because US copyright law decisively rejects the idea that copyright protection is due for what is commonly referred to as “sweat of the brow.” In other words, the fact that it takes great effort or resources to compile certain information (like a phonebook), doesn’t entitle that work to any copyright protection – others may use it freely. And where there is copyrightable expression, the difficulty in creating it is irrelevant. So, is all this background aimed solely at supporting the unfair competition claim? Is it a quiet way of asking the court to ignore the “sweat of the brow” precedent, to the extent that it’s ultimately argued by the defendants, in favor of protecting the more sympathetic party? Maybe they’re truly concerned that the courts no longer recognize the value of journalism and need a history lesson? No other AI-related complaint has worked so hard to justify the very existence, needs, and frustrations of its plaintiffs.

Unless Microsoft and OpenAI hustle to strike a deal with the New York Times, this is definitely going to be the case to watch in the next year or two. Not only does it embody some of the strongest legal arguments related to copyright, it is likely to become a lightning rod for many interests who will use it to wage a proxy war on their behalf.

Update (2024-02-28): Blake Brittain (via Slashdot):

OpenAI said in a filing in Manhattan federal court on Monday that the Times caused the technology to reproduce its material through “deceptive prompts that blatantly violate OpenAI’s terms of use.”


“The truth, which will come out in the course of this case, is that the Times paid someone to hack OpenAI’s products.”

Crashing iPhones With a Flipper Zero

Dan Goodin (via Bruce Schneier):

To van der Ham’s surprise and chagrin, the same debilitating stream of pop-ups hit again on the afternoon commute home, not just against his iPhone but the iPhones of other passengers in the same train car. He then noticed that one of the same passengers nearby had also been present that morning. Van der Ham put two and two together and fingered the passenger as the culprit.


The culprit, it turned out, was using a Flipper Zero device to send Bluetooth pairing requests to all iPhones within radio range. This slim, lightweight device has been available since 2020, but in recent months, it has become much more visible. It acts as a Swiss Army knife for all kinds of wireless communications. It can interact with radio signals, including RFID, NFC, Bluetooth, Wi-Fi, and standard radio. People can use it to covertly change the channels of a TV at a bar, clone some hotel key cards, read the RFID chip implanted in pets, open and close some garage doors, and disrupt the normal use of iPhones.


Despite its multifaceted capabilities, the Flipper Zero seems best known in recent weeks for its iPhone DoSing capabilities. The way Bluetooth works on iPhones and iPads makes them especially susceptible. Van der Ham flashed his device with custom firmware called Flipper Xtreme, which he acquired on a Discord channel devoted to the Flipper Zero. One firmware setting sends a constant stream of messages announcing the availability of a BLE (Bluetooth low energy) device nearby. This constant stream can be annoying for users of any device, but it doesn’t crash phones. A separate setting, labeled “iOS 17 attack,” is the one the train prankster used.
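From the defender's side, the "constant stream" pattern described above is detectable: a flood of proximity-pairing advertisements from one sender arrives at a rate no legitimate accessory would sustain. A minimal sketch of rate-based detection over a sliding window — the window length, threshold, and sender identifier are assumptions for illustration, not anything from iOS or the Flipper firmware:

```python
# Defensive sketch: flag a BLE advertisement flood by counting adverts per
# sender over a sliding time window. Window and threshold values are
# illustrative assumptions, not parameters from iOS.

from collections import deque

class FloodDetector:
    def __init__(self, window_s=5.0, threshold=50):
        self.window_s, self.threshold = window_s, threshold
        self.seen = {}  # sender -> deque of observation timestamps

    def observe(self, sender, t):
        q = self.seen.setdefault(sender, deque())
        q.append(t)
        # Drop observations that have aged out of the window.
        while q and t - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.threshold  # True => looks like a flood

# 100 adverts from one sender at 50 ms intervals trips the detector.
det = FloodDetector(window_s=5.0, threshold=50)
flagged = any(det.observe("aa:bb:cc:dd:ee:ff", i * 0.05) for i in range(100))
assert flagged
```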

Ric Ford:

Turning off Bluetooth is an unappealing workaround.

Juli Clover:

With the launch of iOS 17.2, Apple has fixed an exploit that allowed the Flipper Zero electronic multi-tool to lock up iPhones, reports ZDNET.

Jo DeVoe (via Hacker News):

“The preliminary investigation indicates that between 10:45 a.m. and 1:30 p.m. on November 29, a student inside Washington Liberty High School utilized an electronic device that caused nearby iPhones to turn off,” she said.


ACPD did not provide additional details, such as what kind of device might have been used, citing the need to preserve the integrity of the ongoing investigation. A cybersecurity expert contacted by ARLnow declined to speculate on how a student might have turned off nearby iPhones.