A study by Microsoft found that the main reason we use digital personal assistants is to search for a quick fact. This is likely because complicated questions rarely yield relevant results, while simple queries like measurement conversions are hard to get wrong.
How well that works, however, depends on the device, as a study by Perficient Digital demonstrates. The Digital Personal Assistants Accuracy Study tested the accuracy of answers to 4,999 queries across seven personal assistant devices.
The contenders were Alexa, Echo Show, Cortana, Google Assistant on Google Home, Google Assistant on Google Home Hub, Google Assistant on a smartphone, and Siri.
The study once again shows Google Assistant on a smartphone to be the best at answering questions completely and accurately, while Cortana took the lead in the number of questions it attempted to answer. Alexa also showed growth in the number of questions attempted.
As a general trend, accuracy dropped across all devices compared to last year's study. Siri gave by far the most incorrect responses, with Echo Show the next least accurate.
Here’s a summary comparing the leading digital personal assistants by the percentage of questions attempted and the percentage answered fully and correctly.
The remaining categories are shown in the following tables.
Year-over-year study of attempted answers
Year-over-year study for the percentage of fully and correctly answered questions
Number of Incorrect Responses
Percentage of responses that feature third-party snippets
Featured snippets are answers sourced from a third party and delivered by the digital personal assistant.
Funniest Personal Assistant
Alexa and Siri tied for the most jokes.
The general decrease in accuracy across the board suggests that current algorithms may have reached their limits, and that new technology will be required before these assistants can progress further.