Google Assistant was announced two years ago at I/O and quickly made its way to Home smart speakers and Android devices. Just last month, the first Smart Displays were released, and today Google detailed what it has learned about Assistant usage since the launch.

The post, penned by Assistant engineering VP Scott Huffman, is specifically focused on "voice technology" and how conversational interfaces are now available in speakers, cars, computers, and phones. Given that wide availability, Google has noticed that Assistant usage is "largely driven by their environment and what they're trying to accomplish in their daily routines."

In the morning, we’ll use our smart speakers to ask for the weather or listen to the news. During lunch and on the commute home, we’ll text and call our friends, or look for local restaurants. When we get home, we want to listen to music. And at the end of the day, we get ready for tomorrow with tasks like “set an alarm,” “set a reminder,” or “ask the Assistant to tell me about tomorrow’s meetings.”

Productivity, weather, media, and news queries to Google Home all peak in the morning, while the latter two categories see a resurgence at night. Meanwhile, Android usage for communications and local queries increases as many users leave work and get ready for social activities.

In general, Assistant queries are “40 times more likely to be action-oriented than Search,” meaning that people want to accomplish tasks, like playing music or controlling the smart home, rather than getting information. Moving forward, Google wants to build more of these experiences, like the visual snapshot that Google introduced for Assistant on phones last month.

With the launch of Smart Displays and our new visual experience for phones, we’ve evolved the Google Assistant to become much more dynamic, spanning voice, screens, tapping and typing. And we’re seeing people respond—in fact, nearly half of interactions with Assistant today include both voice and touch input.

When users ask to adjust the temperature, Assistant performs the command but also surfaces a dial for further adjustment, resulting in more of this mixed interactivity.

As people use smart assistants more, Google notes how “expectations go up in terms of complex dialogue.” The company details how users have asked Assistant to set an alarm in 5,000 different ways. At I/O 2018, Google rolled out Multiple Actions with a single command and Continued Conversation so that users don’t have to repeat a hotword.

We might see “weather Chicago” typed in Search, whereas with the Assistant we see much longer and more conversational queries like “what’s the weather today in Chicago at 3pm.” On average, Assistant queries are 200 times more conversational than Search.

Google notes that the technology is being adopted across the spectrum, with many experiencing this firsthand when introducing a smart speaker to both younger and older people.

There’s no user manual needed, and people of all ages, across all types of devices, and in many different geographies can use the Assistant. Because of this, we’re finding that Google Assistant users defy the early adopter stereotype—there’s a huge uptick in seniors and families, and women are the fastest growing user segment for the Assistant.

This is also the case for users coming online for the first time around the world, with Google highlighting how “voice is taking the forefront as the primary way they interact with their devices.” For instance, Assistant usage has tripled in India since early 2018.
