Virtual assistants aren’t really a new technology, but over the past couple of years, they’ve become more and more important to our daily lives. Amazon had a big head start with its Alexa platform, but Google quickly caught up with its own Assistant. Now, a study is confirming what we already knew – Google’s is better.

If you’ve ever pitted Assistant against Alexa, you’ve probably found that Google’s option tends to be a bit more accurate. This week, two studies were released, one from Stone Temple Consulting and one from ROAST (via Search Engine Land), giving us a bit of insight into how these assistants differ.

First, there’s the report from Stone Temple, a follow-up to a similar study from last year. It compares various assistants across over 5,000 different queries, scoring them on how many questions they attempted to answer, as well as how accurate those answers were.

Here, Google Assistant pulled ahead of the competition. At its best, Google was able to answer over 90% of the questions, with an accuracy rate just shy of 80%. Interestingly, though, those numbers drop when you switch from Assistant on a phone to Google Home. On Home, the study found Assistant could only answer about 85% of questions, with an accuracy rate of about 65%.

However, compared to the rest of the competition, those numbers are still pretty impressive. Microsoft’s Cortana surprisingly comes in second place, just a couple of points behind Google Home in accuracy (although it attempted to answer more of the 5,000 questions).

Amazon’s Alexa, meanwhile, sees a huge improvement between the 2017 and 2018 studies. The updated study found that Alexa was able to answer over 80% of the questions posed, up from only around 50% previously.

In case you were curious, Apple’s Siri came in last place here, answering barely 80% of the questions, with just 40% of those answers being accurate.

Along with the Stone Temple study, ROAST also put Google Assistant to the test. This study focused exclusively on Assistant and involved 10,000 questions in total, split across 22 verticals, including the likes of hotels, restaurants, education, and travel information. In this study, Google Assistant was able to answer about 45% of the questions posed.

Interestingly, the study found that Google Assistant didn’t always turn to the Featured Snippet for results. As the study points out:

One of the key observations we found is that the Google Assistant result didn’t always match the result found on a web search featured snippet answer box. Sometimes the assistant didn’t read out a result (even if a featured snippet answer box existed) and we also had instances of the assistant reading out a result from a different website than the one listed in the featured snippet answer box.

The results of the study can be seen below, along with a legend for reading the data.

Check out 9to5Google on YouTube for more news:



About the Author

Ben Schoon

Ben is a writer and video producer for 9to5Google.

Find him on Twitter @NexusBen.