Tuesday, May 14, 2024

Revamping Siri With iOS 18

Tripp Mickle, Brian X. Chen, and Cade Metz (MacRumors, Slashdot):

Apple’s top software executives decided early last year that Siri, the company’s virtual assistant, needed a brain transplant.

The decision came after the executives Craig Federighi and John Giannandrea spent weeks testing OpenAI’s new chatbot, ChatGPT. The product’s use of generative artificial intelligence, which can write poetry, create computer code and answer complex questions, made Siri look antiquated, said two people familiar with the company’s work, who didn’t have permission to speak publicly.


Apple is expected to show off its A.I. work at its annual developers conference on June 10 when it releases an improved Siri that is more conversational and versatile, according to three people familiar with the company’s work, who didn’t have permission to speak publicly. Siri’s underlying technology will include a new generative A.I. system that will allow it to chat rather than respond to questions one at a time.

The update to Siri is at the forefront of a broader effort to embrace generative A.I. across Apple’s business. The company is also increasing the memory in this year’s iPhones to support its new Siri capabilities.

John Gruber:

I don’t think there’s a single sentence of news in the entire thing.

I think the timeline for recognizing and incorporating generative AI is new.

Dave Mark:

Amazing to me that it took 3 wks of ChatGPT to convince Apple that Siri was “antiquated”.

Whole bunch of folks have been screaming this from the rooftops for years. 😐

SiriVid (2010):

Siri is a virtual personal assistant on your phone. You ask Siri in your own voice, and it helps you get things done when you're on the go. This video shows a demo of Siri helping plan a romantic evening, get tickets to a great movie, discover cool things to do on the weekend, and getting back home.

Via Adam Overholtzer:

Apple never delivered on the promise of Siri.

To be clear: the stuff in this Siri demo, which is from before they were acquired by Apple, never really worked. But Apple chose to abandon this vision and here we are 14 years later, with a Siri that still lacks many of the features shown in the video.

I maintain that the real problem with Siri is that the basics don’t work well. The purported focus on conversation and generative AI gives the impression that they still don’t get this.



My issues with Siri, at least a third of the time, stem from its speech recognition. For anything more complicated than timers and alarms, it chronically hears the wrong thing. I can talk clearly in a quiet room, I can yell, and it’ll completely mishear me anyway. An LLM is a potential solution to some of its problems once you get past this first hurdle, but it’s this hurdle that has put me off using Siri unless I have no other choice.

I agree. My problems with Siri are mostly that it has issues with the basic, very trivial things I need it for. Sometimes by misunderstanding, sometimes because of some kind of issue that tells me to try again later.

It would be cool if Siri could do more complex stuff with multiple steps, but I don’t really want to have a conversation with it.

That seems like a hollow feature, much like ChatGPT in general: very impressive on the surface, but when you find out how it works, it’s not nearly as useful as it seems like it would be.

I also don’t want anything by OpenAI on my phone. It’s an evil theft-based company.

I don’t think the issues I experience with Siri are due to its being based on an intent model rather than an LLM. Only rarely does it trigger the wrong intent. More often, the problem is with the speech-to-text model. Another common issue is the wrong device responding: the other day I couldn’t stop my Apple Watch from responding when I was trying to play music from my iPhone, and my HomePod in the next room will often chime in when I try to ask my phone something with the doors open upstairs. The entity recognition and matching in Apple Music is terrible when there’s more than one version of an album (I want the 2016 remix, not the 2001 remaster!) or when the artist name isn’t made up of standard English words. I’m not sure how these issues would be solved by an LLM (maybe STT could be improved), and the current Siri implementation at least never hallucinates.

The news to me in this article is that it took the release of ChatGPT for Apple to seriously consider using LLMs. If the reporting is accurate, early 2023 is quite late for Apple to have woken up to the threat and opportunity of LLMs. GPT-3 was released long before ChatGPT and was significant for folks working in the AI field. Surely there were people high up at Apple aware of it, playing around with it, and considering the future evolution of Siri.

Maybe the article is a significant simplification of the timeline, or doesn't have the full picture.

Also, as a counter to Dave Mark's comment, I doubt Apple was ignorant of Siri being antiquated. The challenge is thoroughly evaluating the idea of improving or replacing it with an LLM. Given the large surface area of the product (or really, the collection of products that make up Siri), that's not a snap decision.

One more thought: no matter what, Apple is undoubtedly late to the party here. There is an alternative history in which Apple pushed Siri forward by continuously investing in and testing the boundaries of AI. In this world, Apple would have announced a seemingly magical upgrade to Siri before the rest of the world had ever heard of ChatGPT.

So yes, while it’s still early days for generative AI, Apple certainly took its foot off the gas, and Microsoft/OpenAI ate its lunch. Apple’s being late may not matter materially in this case, but I prefer the Apple that pushes ahead of the competition to the Apple that plays catch-up.
