Why Does Siri Seem So Dumb?
Google Now, on the same Apple devices, using the same voice input, answered every one of these questions clearly and correctly. And that isn’t even Google’s latest digital helper, the new Google Assistant.
If you try most of these broken examples right now, they’ll work properly, because Apple fixed them after I tweeted screenshots of most of them in exasperation, and asked the company about them.
[…]
For instance, when I asked Siri on my Mac how long it would take me to get to work, it said it didn’t have my work address — even though the “me” contact card contains a work address and the same synced contact card on my iPhone allowed Siri to give me an answer.
Similarly, on my iPad, when I asked what my next appointment was, it said “Sorry, Walt, something’s wrong” — repeatedly, with slightly different wording, in multiple places on multiple days. But, using the same Apple calendar and data, Siri answered correctly on the iPhone.
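For what it's worth, the work address Siri claimed not to have is a single Contacts-framework call away on the Mac. A minimal Swift sketch, assuming the app has been granted Contacts access (the printing is just for illustration):

```swift
import Contacts

// Read the work address from the "me" card via the Contacts framework.
// macOS only: unifiedMeContactWithKeys(toFetch:) has no iOS counterpart.
let store = CNContactStore()
let keys = [CNContactPostalAddressesKey as CNKeyDescriptor]

do {
    let me = try store.unifiedMeContactWithKeys(toFetch: keys)
    if let work = me.postalAddresses.first(where: { $0.label == CNLabelWork }) {
        let formatted = CNPostalAddressFormatter.string(from: work.value,
                                                        style: .mailingAddress)
        print("Work address on the me card:\n\(formatted)")
    } else {
        print("No work address on the me card.")
    }
} catch {
    print("Couldn't read the me card: \(error)")
}
```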
Spend ten million (he said figuratively) and get the best QA staff in the business, and make sure no silo gets in the way of QAing interactions between apps. If Maps borks like this, the QA team “for Maps” has to be able to hold Siri and Contacts (or whatever else) accountable. No software ships until this blocking bug is fixed.
Indeed, Siri now knows the date and time of the next U.S. presidential debate, but where Siri fundamentally falls apart is its inability to maintain context and chain together multiple commands.
[…]
These sorts of glaring inconsistencies are almost as bad as universal failures. The big problem Apple faces with Siri is that when people encounter these problems, they stop trying. It feels like you’re wasting your time, and it makes you feel silly or even foolish for having tried. I worry that even if Apple improves Siri significantly, people will never know it, because they won’t bother trying after having been burned so many times before. In addition to the engineering hurdles of actually making Siri much better, Apple also has to overcome a “boy who cried wolf” credibility problem.
I think the inconsistencies are worse than outright failure. The inability to answer a query implies a limitation which, while not ideal, is understandable. Inconsistency, on the other hand, makes Siri feel untrustworthy. If I can’t reliably expect the same result with basic queries that are almost identical, I’m much less likely to find it dependable.
I pointed out similar problems in a Macworld article in August. For me, Siri is a waste of time.
The only thing Siri consistently does correctly for me is set timers. I keep trying to use it to add reminders and am usually frustrated. Either the phone can’t connect to Siri, or it mis-parses what I said. It’s easier to use my finger to create a new action in OmniFocus and then to use the dictation button on the keyboard. iOS is pretty good at transcribing what I say. The problem is that interpreting it is unreliable. And that’s why I rarely even try to ask it more complicated questions.
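That split is visible in the public APIs, too. With iOS 10’s Speech framework, transcription is a single call, and everything after that is the app’s problem. A sketch, assuming speech-recognition permission has already been granted and a hypothetical memo.m4a recording in the app bundle:

```swift
import Speech

// Transcribe a recording with SFSpeechRecognizer (iOS 10).
// Turning audio into text ends here; deciding what the text
// means (dates, projects, intents) is a separate, harder problem.
func transcribeMemo() {
    guard let url = Bundle.main.url(forResource: "memo", withExtension: "m4a"),
          let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
        return
    }
    let request = SFSpeechURLRecognitionRequest(url: url)
    _ = recognizer.recognitionTask(with: request) { result, _ in
        if let result = result, result.isFinal {
            print("Heard: \(result.bestTranscription.formattedString)")
        }
    }
}
```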
See also: Daniel Jalkut on Siri logging, David Sparks’s Dragon Professional review.
Update (2016-10-14): Stephen Hackett:
Siri should feel like a living, growing platform and it just doesn’t. Even SiriKit, which allows developers to build plugins for the service, doesn’t get Apple far enough down the road. This is a platform vendor problem, and not one a handful of apps can solve.
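For a sense of how constrained SiriKit is, here is roughly what an iOS 10 Intents extension boils down to (a sketch using the messaging domain; the class name is made up). The app never sees raw language, only an intent Siri has already parsed, and only within a handful of Apple-chosen domains:

```swift
import Intents

// A bare-bones SiriKit Intents extension (iOS 10 / Swift 3 signatures).
class IntentHandler: INExtension, INSendMessageIntentHandling {

    override func handler(for intent: INIntent) -> Any {
        return self
    }

    // Siri has finished all of its language understanding by the time
    // this is called; the extension just acts on the parsed intent.
    func handle(sendMessage intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```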
Update (2016-10-17): David Sparks:
Why does it take an article by a popular journalist to get these things fixed? I feel as if Siri needs more attention. I don’t think the underlying technology is as bad as most people think, but it is these little failures that cause everyone to lose faith.
Update (2016-10-20): Nick Heer:
I think it’s important to keep bringing it up because I think Siri is currently fundamentally flawed in its design.
[…]
More worrying for me is that the user interface component of Siri — a field where Apple typically excels — simply isn’t good enough.
Unfortunately, even setting alarms with Siri doesn’t work correctly. Try this:
Create an alarm for 9:43, repeating every Tuesday, and turn it off.
Now: "Siri, set an alarm for 9:43" - it simply turns on the 9:43 alarm which wont go off until Tuesday, not today or tomorrow as expected.
Peter, if I'm parsing your results correctly, were I a human assistant, I'd act exactly as Siri is doing in this particular case...
Chucky: I agree with Peter. If I say "an alarm", the indefinite article implies a different alarm, not the pre-existing one, so Siri should create a new one. If I say "Switch on the 9:43 alarm", and assuming I only have one set up, then yes, I should not expect it to go off until Tuesday.
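To make the ambiguity concrete: there's no public API for the Clock app's alarms, but in UserNotifications terms (iOS 10) the two readings of "set an alarm for 9:43" are two different triggers. A sketch; the identifier and function name are made up:

```swift
import UserNotifications

// Two readings of "set an alarm for 9:43", approximated with
// local notifications (real Clock alarms have no public API).
func setNineFortyThreeAlarm(reusingTuesdayAlarm: Bool) {
    var time = DateComponents()
    time.hour = 9
    time.minute = 43

    let trigger: UNCalendarNotificationTrigger
    if reusingTuesdayAlarm {
        // What Siri does: re-enable the existing repeating alarm,
        // which only fires on Tuesdays (weekday 3, Sunday being 1).
        time.weekday = 3
        trigger = UNCalendarNotificationTrigger(dateMatching: time, repeats: true)
    } else {
        // What "set *an* alarm" suggests: a fresh one-off alarm
        // at the next 9:43, today or tomorrow.
        trigger = UNCalendarNotificationTrigger(dateMatching: time, repeats: false)
    }

    let content = UNMutableNotificationContent()
    content.title = "Alarm"
    content.sound = UNNotificationSound.default()  // iOS 10-era spelling

    let request = UNNotificationRequest(identifier: "alarm-9-43",
                                        content: content,
                                        trigger: trigger)
    UNUserNotificationCenter.current().add(request, withCompletionHandler: nil)
}
```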
I'm tired of the Siri bug that has persisted since iOS 8 (it was fine in iOS 7, IIRC) where, if she misunderstands your command to create a reminder and you then manually correct it (or, now with iOS 10, select one of her alternate suggestions), it STILL creates TWO reminders: the misinterpretation and the correction. I have filed this bug on Bug Reporter, reported it via Apple's Feedback page, and even emailed Craig Federighi and Alex Acero about it (they both responded). Yet here we are at iOS 10.1 and it still exists. It makes me wonder if Apple even cares anymore, if they are letting simple bugs like this make it through the development process of THREE different versions of iOS, past multiple bug reports and emails to SVPs... and still not fixing it.
Like, why would you provide a mechanism to correct Siri's input, but then not actually honor it when the user chooses the corrected version?
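In EventKit terms, honoring the correction would mean amending the reminder that was just saved rather than saving a second one. A sketch, assuming Reminders access has been granted; the function names are hypothetical, and this is surely not how Siri is actually wired up internally:

```swift
import EventKit

let store = EKEventStore()

// Save the reminder as Siri first (mis)heard it.
func save(misheardTitle: String) throws -> EKReminder {
    let reminder = EKReminder(eventStore: store)
    reminder.calendar = store.defaultCalendarForNewReminders()
    reminder.title = misheardTitle
    try store.save(reminder, commit: true)
    return reminder
}

// Honor the user's correction: update the existing reminder in place.
// The bug described above behaves as if a second EKReminder were created
// here instead, leaving both the misinterpretation and the correction.
func applyCorrection(_ correctedTitle: String, to reminder: EKReminder) throws {
    reminder.title = correctedTitle
    try store.save(reminder, commit: true)
}
```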
[…] as I keep saying, Siri seems to be much better at turning speech into text than it is at acting on that text. […]