Monday, May 29, 2023

Apple Intelligence

Daniel Jalkut:

People are justifiably less confident about Apple’s prospective plans in the area of artificial intelligence (AI), and particularly in the realm of large language models: the technology behind such imagination-captivating products as OpenAI’s ChatGPT, and GitHub Copilot (which itself uses another OpenAI language model).

I zeroed in on ChatGPT and Copilot because it’s easy to imagine the functionality of these services shining in the context of two important Apple products: Siri, and its Xcode developer tools. In fact, technology is advancing so quickly that the absence of something like ChatGPT and something like Copilot in these products seems likely to be viewed as a major shortcoming in the near future, if it isn’t seen that way already.

[…]

Apple Intelligence won’t be as good as ChatGPT or GitHub Copilot, at least not to start with. But it will be Apple’s. They can frame the pros and cons however they see fit, working their typical marketing magic to make its shortcomings seem less important, if not downright advantageous.

It would seem that Apple is way behind, not only in terms of announced products, but also because, as large language models become commoditized, access to proprietary training data and integrations will become key. Apple does have some unique data such as iTunes and App Store reviews, but these seem less useful than what its competitors have. Xcode Cloud could potentially be a great data source, but it, rightly, is designed for privacy:

Source code is only accessed for builds and the ephemeral build environments are destroyed when your build completes.

On the other hand, perhaps we are not that far from fitting really useful, if not market-leading, models on device. Apple has great hardware to run them, which is already deployed. It could work offline and preserve your privacy. This could be easier and cheaper to scale up to large numbers of users than models running in data centers.
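To put rough numbers on “fitting on device”: the weights alone take about parameter count × bits per weight ÷ 8 bytes. Here is a quick sketch of that arithmetic; the model sizes and quantization levels are illustrative assumptions, not anything Apple has announced:

```swift
import Foundation

// Back-of-the-envelope weight memory for on-device LLMs (illustrative).
// Real usage is higher: KV cache, activations, and runtime overhead add up.
func weightMemoryGiB(parameters: Double, bitsPerWeight: Double) -> Double {
    (parameters * bitsPerWeight / 8) / 1_073_741_824  // bytes per GiB
}

// Hypothetical configurations, not anything Apple has announced.
let configs: [(name: String, params: Double, bits: Double)] = [
    ("7B @ 16-bit", 7e9, 16),  // ~13.0 GiB: too big for current phones
    ("7B @ 4-bit",  7e9,  4),  // ~3.3 GiB: plausible on recent hardware
    ("3B @ 4-bit",  3e9,  4),  // ~1.4 GiB: a comfortable fit
]

for c in configs {
    let gib = weightMemoryGiB(parameters: c.params, bitsPerWeight: c.bits)
    print("\(c.name): ~\(String(format: "%.1f", gib)) GiB of weights")
}
```

At 4-bit quantization, a 7B-parameter model’s weights would fit in the RAM of recent high-end iPhones, though the KV cache and runtime overhead eat into that margin.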


Update (2023-12-21): Tim Bradshaw:

Apple’s latest research about running large language models on smartphones offers the clearest signal yet that the iPhone maker plans to catch up with its Silicon Valley rivals in generative artificial intelligence.

The paper, entitled “LLM in a Flash,” offers a “solution to a current computational bottleneck,” its researchers write.

Its approach “paves the way for effective inference of LLMs on devices with limited memory,” they said. Inference refers to how large language models, the large data repositories that power apps like ChatGPT, respond to users’ queries. Chatbots and LLMs normally run in vast data centers with much greater computing power than an iPhone.
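The paper’s actual techniques (windowing and row-column bundling) go well beyond this, but the basic idea, keeping the weights in flash and faulting them into memory only as they are needed, can be sketched with a plain memory-mapped file. A toy Swift illustration, not the paper’s algorithm; the file name and layout are hypothetical:

```swift
import Foundation

// Toy illustration of flash-resident weights: memory-map the file so
// pages are faulted in from flash only when actually read, instead of
// loading the whole model into RAM up front. "weights.bin" and its
// layout are hypothetical.
let url = URL(fileURLWithPath: "weights.bin")
let weights = try Data(contentsOf: url, options: .alwaysMapped)

// Pull one layer's slice out of the mapping; only these pages are
// actually read off flash.
func layerSlice(_ data: Data, offset: Int, count: Int) -> Data {
    data.subdata(in: offset ..< offset + count)
}

let layer0 = layerSlice(weights, offset: 0, count: 4096 * 4096 * 2)  // one fp16 layer, hypothetical size
print("mapped \(weights.count) bytes, touched \(layer0.count)")
```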

Update (2024-06-07): Tim Hardwick:

Apple will announce its new AI feature set for Apple devices at WWDC on June 10, and Bloomberg’s Mark Gurman reports that it will be officially called “Apple Intelligence.”

[…]

Apple Intelligence is expected to handle basic AI tasks, and it will work mostly on-device. In other words, the model runs on the device’s onboard processor rather than in the cloud.

4 Comments


I would disagree that Apple is behind. Their focus is on running models on your device, and the NPU in Apple SoCs has been making impressive progress over the generations. The current generation has roughly 1/10 the performance of a $36K Nvidia H100.

While Apple has hidden its NPU capabilities behind its own frameworks, they have published research showing work toward LLM support by bringing the Transformer architecture to the NPU, and I would expect major further announcements at WWDC:

https://machinelearning.apple.com/research/neural-engine-transformers
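Core ML is already the front door to the NPU. As a sketch, pointing an already-converted transformer model at the Neural Engine looks like this; the model file here is hypothetical:

```swift
import CoreML

// Ask Core ML to schedule the model on the Neural Engine where possible.
// "TransformerModel.mlmodelc" is a hypothetical compiled Core ML model,
// e.g. one converted along the lines of Apple's ane_transformers work.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine  // skip the GPU, prefer the ANE

let url = URL(fileURLWithPath: "TransformerModel.mlmodelc")
let model = try MLModel(contentsOf: url, configuration: config)
print(model.modelDescription)
```

Note that Core ML treats the compute units as a request, not a guarantee; layers the ANE can’t handle fall back to the CPU.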

The limiting factor in how complex the models can be is how much RAM the system has. But open-source research projects have shown you can get impressive results with much smaller models than GPT-3 by taking an existing model and tuning it for your specific application, and proprietary data seems to be a non-issue:

https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
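Apple even ships a limited on-device tuning path today: Core ML can update designated layers of an “updatable” model with local data. A rough sketch of that shape; the model URL and batch provider are hypothetical, and this is far narrower than full LLM fine-tuning:

```swift
import CoreML

// Sketch of on-device personalization with Core ML's update API.
// Assumes a hypothetical model compiled with updatable layers and a
// caller-supplied MLBatchProvider of local training examples.
func personalize(modelURL: URL, batch: MLBatchProvider) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,
        trainingData: batch,
        configuration: nil
    ) { context in
        // The updated model is written back locally; nothing leaves the device.
        try? context.model.write(to: modelURL)
    }
    task.resume()
}
```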


"This could be easier and cheaper to scale up to large numbers of users than models running in data centers."

OK, but which data would be used to build the model? Apple's own code?

Because if you are building a solution to help third-party developers code based only on the code they have already written, it’s pretty much useless.


I think there's an interesting intersection between LLMs and privacy: create local embeddings of people's private data.

Imagine Spotlight, except it actually answers your questions about your own data. When did I last write an email to John Snow? What was it about? Summarize everything my boss told me in the last week. Write an email back with a summary of this spec draft I wrote in Word.
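You can sketch the retrieval half of that today with the NaturalLanguage framework. The snippets below stand in for locally indexed mail and files, and a real assistant would need a much better embedding model, but the shape is the same:

```swift
import NaturalLanguage

// Embed short documents locally and rank them against a query.
// Nothing leaves the device; NLEmbedding's built-in sentence vectors
// are crude next to modern LLM embeddings, but illustrate the idea.
guard let embedder = NLEmbedding.sentenceEmbedding(for: .english) else {
    fatalError("sentence embedding unavailable")
}

func cosine(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    let na = a.reduce(0) { $0 + $1 * $1 }.squareRoot()
    let nb = b.reduce(0) { $0 + $1 * $1 }.squareRoot()
    return dot / (na * nb)
}

let documents = [  // hypothetical local snippets
    "Email to John Snow about the Q3 budget review",
    "Draft spec for the sync service, written in Word",
    "Notes from Monday's meeting with my boss",
]

let query = "When did I last write to John Snow?"
guard let q = embedder.vector(for: query) else { fatalError() }

let ranked = documents
    .compactMap { doc in embedder.vector(for: doc).map { (doc, cosine(q, $0)) } }
    .sorted { $0.1 > $1.1 }

for (doc, score) in ranked {
    print(String(format: "%.3f  %@", score, doc))
}
```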

This would be a novel approach that nobody else is currently doing, and it would allow Apple to brag about how it isn’t like all of those other data-ingesting companies that just want people’s information to train LLMs that eventually make these same people superfluous. Apple uses LLMs to empower you, not to replace you, etc.


Old Unix Geek

There’s a not-insignificant chance that countries will make LLM products difficult to sell, or even to develop, since academic papers could be argued to be “open source,” making their authors liable.

Perhaps Apple would be better off letting things settle first.
