Apple Intelligence Announced
Apple (preview, Hacker News, MacRumors, 9to5Mac):
Writing Tools help users feel more confident in their writing. With Rewrite, Apple Intelligence allows users to choose from different versions of what they have written, adjusting the tone to suit the audience and task at hand. From finessing a cover letter, to adding humor and creativity to a party invitation, Rewrite helps deliver the right words to meet the occasion. Proofread checks grammar, word choice, and sentence structure while also suggesting edits — along with explanations of the edits — that users can review or quickly accept. With Summarize, users can select text and have it recapped in the form of a digestible paragraph, bulleted key points, a table, or a list.
[…]
In the Notes and Phone apps, users can now record, transcribe, and summarize audio.
[…]
Natural language can be used to search for specific photos, such as “Maya skateboarding in a tie-dye shirt,” or “Katie with stickers on her face.” Search in videos also becomes more powerful with the ability to find specific moments in clips so users can go right to the relevant segment. Additionally, the new Clean Up tool can identify and remove distracting objects in the background of a photo — without accidentally altering the subject.
[…]
A cornerstone of Apple Intelligence is on-device processing, and many of the models that power it run entirely on device. To run more complex requests that require more processing power, Private Cloud Compute extends the privacy and security of Apple devices into the cloud to unlock even more intelligence.
Apple today unveiled a new version of Siri that it promises is more natural and helpful. The new Siri is powered by Apple Intelligence generative AI models.
[…]
Apple says the new Siri will understand context, so you don’t have to repeat information in subsequent requests.
[…]
Siri will have on-screen awareness about what you are currently looking at, and have the ability to take in-app actions.
[…]
App Intents will allow Siri to work deeply with first-party and third-party apps.
But will Siri be able to create a reminder with the literal text of what I said?
Siri will determine if queries may be useful to forward to ChatGPT, and asks the user for permission to share. This enables Siri to leverage ChatGPT’s image- and text-understanding capabilities with no need to jump between tools.
[…]
Siri will leverage GPT-4o for free, with no need to create an account. Requests are not logged and IP addresses are obscured.
Apple has unveiled a new Image Playground feature that allows you to create generative images on the fly using a range of concepts like themes, costumes, accessories, places, and more.
You can type a description and choose from Animation, Illustration, or Sketch, and Image Playground will create an image tailored to your preferences.
Apple shared a new feature that will enable you to create an emoji for any occasion. Apple calls this AI-powered feature Genmoji.
[…]
Since emojis are actually Unicode characters that work cross-platform, Apple’s Genmoji can’t technically work the same way as other emoji; if they did, they wouldn’t display properly on non-Apple devices. Instead, Apple creates Genmoji as images.
Meek Geek wonders whether the hardware requirements are artificial.
Previously:
- macOS 15 Sequoia Announced
- iPadOS 18 Announced
- iOS 18 Announced
- Siri Regressions in iOS 17
- Giving Up on Siri and HomePod
- Apple Intelligence
Update (2024-06-14): See also:
- OpenAI (Hacker News)
- iJustine: Craig Federighi & John Giannandrea (The Verge, 9to5Mac)
- Joe Rosensteel
- Ben Thompson
- M.G. Siegler
- Nick Heer
- Jason Snell
- Warner Crocker
- Slashdot
Here’s the thing. This all looks amazing. But, when it comes to knowing when to pick my mom up from the airport, I’m going to have to triple-check the results with the source data to be sure the AI isn’t just making stuff up.
I’m sure Apple has this covered better than other companies, but it’s going to be a long time before I blindly trust the results of so much disparate data.
Biggest takeaway from WWDC: everyone overestimated Tim Cook and underestimated Sam Altman. Apple I’m sure thinks this is a stopgap until they can swap in their own LLMs. But OpenAI is betting this is a stopgap until they can swap in their own phone. It remains to be seen who is right here, but I can tell you that OpenAI is getting way more out of being put in front of every Apple customer than Apple is getting from finally accurately telling you George Washington’s birthday or whatever.
Left unanswered on Monday: which company is paying the other as part of a tight collaboration that has potentially lasting monetary benefits for both. But, according to people briefed on the matter, the partnership isn’t expected to generate meaningful revenue for either party — at least at the outset.
I initially wrote off Apple’s integration with ChatGPT as an admission of defeat, that they couldn’t develop an LLM competitive with GPT-4o or Gemini or Claude despite having near infinite resources, powerful ML co-processors in their hardware lineup going back years and some very bright people.
But now I’m beginning to see that Apple’s strategy is actually kinda brilliant in unexpected ways.
[…]
Apple is letting the rest of the industry burn money and duke it out while providing a ton of value for their customers. This echoes its approach of integrating third-party search providers, with a 2024 AI-craze twist.
I still have so many questions about ‘Apple Intelligence’ after yesterday. Does Siri just… not get better?… on anything below an iPhone 15 Pro? No improvement to the cloud-based Siri on older devices? No HomePods? Can we as developers not rely on an improved conversational, smart Siri across devices when building our new Siri features?
Lots of great AI things from Apple, as expected.
I still don’t know if Siri can set a fucking timer, get reliable directions from Siri in Maps, or ask Siri for a specific song/band to be played in Apple Music.
Previously:
Update (2024-06-19): Nathan Lambert (via Hacker News):
Apple’s presentation rang very different than most AI keynotes we’ve seen in the last few years. While OpenAI and Google are trying to prove that they are the best at AI, Apple leaned into a narrative of what else we can do with AI. Apple’s large suite of new AI features coming this fall across all their devices, enabling automation, information retrieval, and generation in a privacy-conscious way will be the first time that many people meaningfully interact with AI.
[…]
Apple has done a ton of things to put all of this together on their devices. They figured out how to train great models that use just the right amount of memory with quantization, how to train many adapters that work with different apps or styles, how to get fast latency speeds, and much more they didn’t talk about. This is very serious ML system engineering of a different flavor than large models and large request count handling.
As far as I can tell, Apple Intelligence won’t be treading on anyone’s lawn. If you don’t want to use it, just ignore it, like all the other features that aren’t relevant to how you prefer to use technology. But I have talked with people who find Apple Intelligence some of the more exciting work Apple has done on the software side in years.
Francisco Doménech (via Hacker News):
The new Apple Intelligence system and the expected deep revamp of Siri — coming in the fall, and in testing phase, with the new iOS 18 operating system — will sideline well over 90% of current iPhone users, if they don’t buy a new smartphone.
The iPhone 15 Pro models use the A17 Pro chip, which has a 16-core Neural Engine that’s up to 2x faster than the A16 chip found in the iPhone 15 and iPhone 15 Plus, performing nearly 35 trillion operations per second. Federighi hinted that RAM is also another aspect of the system that the new AI features require, so it is perhaps no coincidence that all the devices compatible with Apple Intelligence have at least 8GB of RAM.
Even Apple CEO Tim Cook isn’t sure the company can fully stop AI hallucinations. In an interview with The Washington Post, Cook said he would “never claim” that its new Apple Intelligence system won’t generate false or misleading information with 100 percent confidence.
What else did they expect him to say?
Apple is not expected to introduce its most significant Apple Intelligence features in September when iOS 18 sees a public release. Instead, many will come alongside a Siri overhaul in a future iOS 18 update that’s set to be introduced in 2025.
Apple currently plans to ship the new Siri UI design this fall, but the most significant upgrades to Siri’s intelligence won’t launch until at least Q1 2025. Honestly, if I were them, I’d hold out until it was all there. One chance to make a first impression.
Previously:
Update (2024-06-24): Benjamin Mayo:
Everything Apple Intelligence does, we’ve seen before.
However, what makes it profound is the intentionality of the design, and the way in which these features are being realised. The marketing is straightforward and easy for people to understand, and the features are integrated naturally into the operating system surfaces that people already use. In fact, most of the ‘new’ features are things that the OS already ostensibly does; things like text editing and manipulation, notification management, smart replies, transcriptions, and — yes — emojis. Apple isn’t trying to convince people on wholesale new dimensions of what a phone is capable of. It’s taking what users already do, but made better by using modern AI techniques, so that users can extract more value out the other end.
[…]
I am personally looking forward to all the new Siri improvements, although it remains a little murky as to exactly what will get better. The semantic index stuff isn’t shipping until next year, and it doesn’t seem to cover everything.
[…]
Perhaps my biggest disappointment of the entire endeavour is there is no indication as to how any of this could conceivably come to products like the Watch or HomePod, Apple’s most voice-oriented devices.
One question I’ve been asked repeatedly is why devices that don’t qualify for Apple Intelligence can’t just do everything via Private Cloud Compute. Everyone understands that if a device isn’t fast or powerful enough for on-device processing, that’s that. But why can’t older iPhones (or in the case of the non-pro iPhones 15, new iPhones with two-year-old chips) simply use Private Cloud Compute for everything? From what I gather, that just isn’t how Apple Intelligence is designed to work. The models that run on-device are entirely different models than the ones that run in the cloud, and one of those on-device models is the heuristic that determines which tasks can execute with on-device processing and which require Private Cloud Compute or ChatGPT. But, see also the previous item in this list — surely Apple has scaling concerns as well.
[…]
VisionOS 2 is not getting any Apple Intelligence features, despite the fact that the Vision Pro has an M2 chip. One reason is that VisionOS remains a dripping-wet new platform — Apple is still busy building the fundamentals, like rearranging and organizing apps in the Home view. VisionOS 2 isn’t even getting features like Math Notes, which, as I mentioned above, isn’t even under the Apple Intelligence umbrella. But another reason is that, according to well-informed little birdies, Vision Pro is already making significant use of the M2’s Neural Engine to supplement the R1 chip for real-time processing purposes — occlusion and object detection, things like that.
Update (2024-07-02): Steve Troughton-Smith:
If the point of Apple Intelligence isn’t to make Siri ‘not shit’, why are we even doing this?
Apple has an entire product line of Siri devices to put in your home that are laughably behind everything else on the market, and an embarrassment to the brand.
Non-intelligent Siri is also going to be the experience for most devices running iOS 18 and co this year.
Why isn’t this priority no. 1?
28 Comments
Wow. It's disgusting to see Apple jump on the generative AI train this way. Really disrespectful to artists and writers, not to mention that this is going to cause massive headaches for teachers and professors.
Yikes
I was hoping that "privacy" would have meant they would run the LLM locally. But no, Apple's definition of privacy is don't trust other corporations running stuff on their servers, but trust us. We promise you, and third parties will vouch, that your data is safe with us.
I'm sure that's fine for everyone who already drinks the "trust Apple" Kool-Aid. But since governments can force any corporation to silently snitch on its users, trusting anything over which one has no control is already letting the cat out of the bag.
This is currently just a big branding exercise. But given the way they're calling it "Apple Intelligence", they won't be able to give up on it, if OpenAI starts making their lives difficult. This is the first time in a long time that Apple has given up its control over a core technology. They dumped Nvidia and Intel over their lack of control. Now they've given control to OpenAI, with respect to a technology that might well matter more than they do in the long run. Reminds me of Microsoft and IBM...
And as Manx says, their original user base might not be best pleased with this move.
Siri is now officially known as Sam, Scraping All your Memories.
Siri is now just a wrapper around OpenAI.
Oh, look, Apple already forwards data from apps on your phone to the government. Oh, and they didn't tell us because the government told them not to. But, "trust us", "we're different".
Apple says you have the right to opt out of your website being used to train their LLM, but when you go to the relevant page, there don't seem to be any LLM-specific directives, so it seems that you're also opting out of your webpage being found using their search.
So far Siri had big problems when I mixed German and English. I wonder if this will get better.
If there is an option to use ChatGPT directly, why is the hardware minimum for Intel Macs a T2 chip, which is only security-related?
Yeah, emojis everywhere. Why call them emojis when they are just images of something? I can already create AI icons in Logoist - not very good ones, though.
I too was surprised at Apple limiting these features to the A17 Pro iPhones, despite how much they have been talking up the NPUs on their chips for so long.
My guess is it's a RAM constraint rather than an older-model NPU constraint: 8GB system RAM is probably the absolute minimum for running these generative models. If only Apple weren't so stingy with RAM...
Also, I wonder if we're about to see even more swapping and performance issues on the already-constrained 8GB Macs.
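The 8GB guess above squares with simple arithmetic. Here is a rough sketch; the ~3B parameter count and the quantization levels are illustrative assumptions, not published Apple figures:

```python
# Back-of-the-envelope weight-storage size for an on-device LLM.
# The 3B parameter count and quantization levels are assumptions
# for illustration, not official Apple numbers.

def model_footprint_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB (ignores KV cache and activations)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"3B params @ {bits}-bit: ~{model_footprint_gb(3.0, bits):.1f} GB")
```

Even a ~3B model quantized to 4 bits wants roughly 1.5GB for weights alone, before the KV cache, activations, and everything else the OS is running, which makes 8GB of system RAM a plausible floor.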
@OUG Yup. My thought too, in the end it's just not private if it's going to Apple. I doubt there is any interest over there in giving control over that or any other connections any time soon either. This is just how it works in the Appleverse now: "Privacy" means "Privacy from anyone but Apple." I also have serious doubts about the quality of the research that will be possible if Apple gates access, because notwithstanding the dangers of security by obscurity, the kind of money that will buy exploits is also very well placed to do that research just fine without Apple. It's all very sad.
@Beatrix Willius Yes, as a T2 iMac owner, I do hope you can still use ChatGPT even without the Apple Intelligence features requiring their models and Silicon. Otherwise they'd just be gating access to a UI, which is certainly plausible for Apple, but it would be very shitty.
"This is the first time in a long time that Apple has given up its control over a core technology"
Apple is working on its own LLMs. It's just that as of right now, the only models that are remotely reliable are OpenAI's. They're going to do an Apple Maps on this as soon as they think people will accept their inferior proprietary option.
I'm curious whether textual input can be used on macOS, because the microphone is such a poor input when working in open-plan offices.
ChatGPT is optional, and every command sent to ChatGPT will have to be manually allowed.
First it will try the built-in LLM, then Apple's LLM on Apple's servers, and if you want you can send things to ChatGPT or other services.
It's the RAM that's the bottleneck. And if you have been following the developments in the LLM space, it has been clear for years that that would be the case.
My prediction is that the next flagship will have even more RAM to support future, even larger models on-device.
At first, AI seemed great, but then it was indeed awkward to see the other company's system so deeply integrated into Apple's ecosystem. I wondered if Apple had acquired OpenAI or if they were planning to do so... Or maybe they will add more services in the future, such as Gemini.
I think it must be the RAM limitation:
Xcode 16 includes predictive code completion, powered by a machine learning model specifically trained for Swift and Apple SDKs. Predictive code completion requires a Mac with Apple silicon and 16GB of unified memory, running macOS 15. (116310768)
Yes, models are big, but there is something very odd about Apple's strategy: spending a lot of silicon on NPUs but not enough on commodity RAM. You could argue that they weren't sure that LLMs would take off, but then why bother spending the money to design and ship NPUs? Even if you're pad-limited, you could use the area to boost cache. Adding RAM, on the other hand, is cheap, and would be valuable for many other uses even if LLMs didn't take off.
Alternatively, you could speed up the flash path and stream chunks of the model into RAM: embeddings are propagated through a stack of transformer layers, and although flash access has 50-100x the latency of RAM, the access pattern is constant and known, so you can request each chunk transferred to RAM beforehand using double-buffering. If flash is still too slow, you can add a large cache to the flash controller, if you feel HBM is too expensive.
So basically if you really care about privacy, there are at least 2 ways to attack the problem to make a relatively large model work on device. It might be a bit slower than you'd like, but you can maintain your privacy selling point. So I don't find the "It's the RAM, you ignoramus, even morons could see that coming decades ago!" argument particularly convincing for a company that owns the whole stack from the hardware up.
Yes, Apple is making its own LLM on device and off (and they're using TPUs from Google and GPUs to train it, not Apple Silicon, just saying). We'll see whether it ends up replacing ChatGPT though. Sounds to me like their AI department is currently seriously outgunned.
I also find it odd that they went with OpenAI, given its tight partnership with Microsoft. You'd have thought that Anthropic might be a more likely choice. That suggests to me that perhaps OpenAI showed more impressive demos of future products than their competitors did.
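The double-buffering idea a few paragraphs up (overlap each layer's compute with the flash fetch of the next layer's weights) can be sketched in a few lines. `load_layer` and `apply_layer` are hypothetical stand-ins, not any real Apple API:

```python
import threading

def run_model(load_layer, apply_layer, num_layers, x):
    """load_layer(i): slow flash read of layer i's weights.
    apply_layer(w, x): fast compute of one layer on the NPU/CPU."""
    current = load_layer(0)               # first layer loaded up front
    for i in range(num_layers):
        fetched = {}
        prefetch = None
        if i + 1 < num_layers:
            # Start fetching the next layer's weights from flash...
            prefetch = threading.Thread(
                target=lambda j=i + 1: fetched.update(w=load_layer(j)))
            prefetch.start()
        x = apply_layer(current, x)       # ...while this layer computes
        if prefetch is not None:
            prefetch.join()
            current = fetched["w"]        # double-buffer swap
    return x
```

As long as a layer's compute time exceeds the flash read, the fetch latency hides completely; otherwise throughput degrades to flash bandwidth, which is the "a bit slower than you'd like" case.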
@OUG
> I also find it odd that they went with OpenAI, given its tight partnership with Microsoft. You'd have thought that Anthropic might be a more likely choice. That suggests to me that perhaps OpenAI showed more impressive demos of future products than their competitors did.
I think it's most likely the latter; as you touched upon, their own AI seems to seriously lag behind. Should they pick a seemingly inferior model, the gap between them and their competitors would widen. So I think they are desperate right now, until they get their own act together in the AI department, which I get the impression is in a kind of panic mode.
@old Unix geek the support doc you linked does say you can opt out of just AI training and still be indexed for search.
“You can add a rule in robots.txt to disallow Applebot-Extended, as follows:
User-agent: Applebot-Extended
Disallow: /private/
Applebot-Extended does not crawl webpages. Webpages that disallow Applebot-Extended can still be included in search results. Applebot-Extended is only used to determine how to use the data crawled by the Applebot user agent.
Allowing Applebot-Extended will help improve the capabilities and quality of Apple’s generative AI models over time.”
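Going by the doc quoted above, a site that wants to stay in Apple's search index while opting out of generative-AI training could use something like this; the blanket `Disallow: /` is my illustrative choice, broader than the doc's own `/private/` example:

```
# Block Applebot-Extended (generative-AI training) site-wide while
# still allowing Applebot (search indexing).
User-agent: Applebot-Extended
Disallow: /

User-agent: Applebot
Allow: /
```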
Here is a good thread from Matthew Green about Private Cloud.
https://ioc.exchange/@matthew_d_green/112597849837858606
The improved image search feature is tempting, but otherwise I have no idea why someone would want any of these features, or be willing to contribute to the intellectual theft and environmental damage this will do. How does this square with Apple’s environmental goals? Do they not count what they farm out to OpenAI as contributing to their carbon footprint? Even without counting OpenAI, their own server infrastructure will grow significantly.
I'll admit that I have very little knowledge of the nuts and bolts of how LLMs work. Just from imagining the amount of data involved, it seems like a "general knowledge" LLM would be too big to house entirely on-device. If they really are that big, then it might not matter whether your device has 8GB or 80GB of RAM.
Also, I think it was a *brilliant*, if somewhat obvious, move for Apple to call their artificial intelligence system Apple Intelligence. Apple can now use Apple Intelligence and AI interchangeably if they like, while competitors may feel compelled to say "artificial intelligence" (spelled out) so they don't have people thinking about Apple (even subconsciously) when they say AI.
I found it a bit short-sighted to put their name on something that will do stupid stuff.
When Gemini fucks up, there is no quick way to make a catchy headline about Google's lack of intelligence.
With Apple they will literally write themselves.
@Kristoffer: Maybe. But Apple is doing a couple of smart things with this. First, they've already said that it's all going to be labeled "Beta" when it first comes out, even in the fall. That gives them a little wiggle room to get things right. Second, they're putting gates around the OpenAI part by asking permission to engage with it. Sure, from a privacy standpoint it lets the user know what's going on with their data. But it's also a demarcation of functionality, clearly showing the user that they're stepping outside the walled garden.
To be honest, I don't think the "gates around OpenAI" will make as much difference as you think, because I don't expect many users to understand what OpenAI or ChatGPT is. They'll think of it all as Apple's product, because it's integrated into the OS and the apps, and it came with your phone. Literally, if you write apps for a living, you hear people telling you that your app should be free because the phone cost so much, and why should anyone have to pay for apps on top of that cost? Aren't you being paid by Apple to make the app for the phone? You're just greedy and double-dipping.
Yes there will be an alert box telling them. And you know what, people don't read alert boxes, even if you tell them to so that you can help fix something. To quote a regular conversation I have with my relatives: "It made an error. Can you fix it?" "What error?" "I don't know, I didn't read it, it was too long and didn't make any sense". "But I've told you I can't fix things if I don't know what is wrong!" "How am I supposed to remember all that gobbledygook?"
FWIW, Mira Murati says that OpenAI's stuff in the labs isn't much more advanced than what users get to use.
OpenAI just appointed a retired US Army general, who also ran the NSA and US Cyber Command until February this year, to its board. Why would someone like that end up on the board? My guess: to sell services to the spooks, military, and cyber-defense people.
It's somewhat ironic. Geoffrey Hinton once said in an interview that he was counting on his student Ilya Sutskever to ensure advanced AI would not end up being used by the military, but then Ilya left/was pushed out of OpenAI. Geoffrey believed so strongly in his anti-war position that he moved from the US to Canada to avoid supporting the US military adventure in Vietnam by paying his taxes, IIRC. So yeah. He must be happy.