Friday, July 27, 2018

Annual Digital Assistant IQ Test

Mitchel Broussard:

Five months after a test that put multiple companies' smart speakers in the spotlight to see how well they performed in various categories, Loup Ventures is back today with an IQ test focused entirely on digital AI assistants. To get the necessary results, the researchers asked Siri, Google Assistant, Alexa, and Cortana 800 questions each on a smartphone and compared their findings to a previous AI test held in April 2017.

[…]

In total, Google Assistant correctly answered 85.5 percent of the 800 questions and understood all of them, compared to Siri's 78.5 percent answered correctly with 11 misunderstood. Alexa correctly answered 61.4 percent and misunderstood 13, while Cortana was the “laggard,” correctly answering 52.4 percent and misunderstanding 19.
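
For reference, here is a quick sketch that converts the reported percentages back into approximate raw counts out of the 800 questions (assuming each assistant was scored on all 800; counts are rounded):

    # Convert Loup Ventures' reported accuracy percentages back to
    # approximate raw counts out of the 800 questions asked.
    results = {
        "Google Assistant": 85.5,
        "Siri": 78.5,
        "Alexa": 61.4,
        "Cortana": 52.4,
    }

    TOTAL_QUESTIONS = 800

    for assistant, pct_correct in results.items():
        correct = round(TOTAL_QUESTIONS * pct_correct / 100)
        print(f"{assistant}: ~{correct}/{TOTAL_QUESTIONS} answered correctly")

That works out to roughly 684 correct answers for Google Assistant, 628 for Siri, 491 for Alexa, and 419 for Cortana, a spread of about 265 questions between the best and worst performers.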

They found that Siri has improved a lot. However, I don’t think that accuracy is necessarily the most important metric. The main problems I have with Siri are its (lack of) availability and slow speed.

2 Comments

Yes, particularly on the watch, which is now where I find myself using Siri most. It's extremely frustrating to say, "Hey Siri, set a timer for 10 minutes," only to have it time out.

"Google Assistant’s outperformance stems largely from the search function “featured snippets.” […] Where others may answer with, “here’s what came back from a search” and a list of links, Google is able to read you the answer. We confirm that each result is correct, but this offers a huge advantage in simple information queries (one of the most common voice computing activities)."

It would be interesting to know how many queries were answered with an actual answer instead of a search result.

Because, for assistants with no display (e.g., smart speakers), a search result with a list of links is not a correct answer.
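
A hypothetical way to measure that would be to label each logged response by type and tally the fractions; the log format and labels here are invented for illustration:

    # Tally how often an assistant gives a spoken answer versus
    # falling back to "here's what I found" plus a list of links.
    # The response log and its labels are hypothetical.
    from collections import Counter

    response_log = [
        "direct_answer",
        "search_results",
        "direct_answer",
        "misunderstood",
        "direct_answer",
    ]

    counts = Counter(response_log)
    total = len(response_log)

    for kind, n in counts.items():
        print(f"{kind}: {n}/{total} ({100 * n / total:.1f}%)")

    # On a screenless speaker, only "direct_answer" responses are
    # actually usable, so that fraction is the one that matters.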
