
Comment: With Assistant Snapshot going away, Google Now’s radical vision is dead for a generation 

It’s been clear for some time now that Assistant Snapshot “does not have the same level of vision, central on-phone placement, or wide backing” as Google Now. It was never going to be game-changing, but its upcoming demise officially closes the chapter on what could have been a radically different way to use smartphones.

As I opined last year, Google Now in 2012 was the foundation of something different – breaking up the siloing of data in mobile applications and instead placing it in a card-based feed. Rather than having to learn different app interfaces and workflows to access their data and other information, users could go to a centralized place on their phone and see what was coming next in their day.

It was a radical idea that Google was positioning to be the foundation of next-generation wearables like Android Wear and Glass. 

Google abandoned this vision of Now around 2016 but resurrected a mere fragment of it two years later as something that was eventually branded Assistant Snapshot. Now, even that is going away.

At the end of the day, having a centralized feed of user data was too radical a proposition. The end goal would have replaced the hierarchy of apps, and the necessary technology was not – and still is not – available. Its demise was also due to Google’s tendency to abandon ideas rather than persist with them.

Long live app silos 

It’s not clear whether end users even want an interface/experience that isn’t a grid of icons. People are already familiar with the concept of opening an app to accomplish a particular, often solitary, task. Tapping a logo and knowing what you’re in for is dead simple and now the universally ingrained way to use a mobile computer, i.e. “There’s an app for that.”

But imagine a device where the homescreen is not an app grid, but rather a feed of pure information. It would be populated with cards that are not too different from today’s widgets, but much more interactive and capable.

You could just scroll through weather, calendar events, reminders, your inbox, and messages. Meanwhile, traffic alerts and bus schedules would appear depending on your location, while payment and transit cards could surface upon entering stores and transit stops. Cards would display information instantaneously and directly because your device knows what you are interested in and what you need.

In 2013, Matias Duarte described the vision that Google co-founder Larry Page had:

When Google Now began, it was really this mandate and this vision from Larry that Google could be better than instant. That Google could be almost psychic.

Matias Duarte

That said, people could find that approach too overwhelming and lacking in direct controls. Additionally, there would certainly be uproar from app developers asked to give up control over the experience and their direct relationship with end users. People would obviously still open apps in that future vision to see more, but Google would act as the intermediary and curator.

Context is king

Having relevant information appear automatically throughout the day requires immense contextual awareness of what the user is doing. A lot of this is achieved through location tracking and awareness of daily patterns. Both have become less energy intensive and more privacy-conscious (on-device) in the last decade, but it remains to be seen whether an algorithm can be more accurate and faster than users deliberately placing app icons where they want them.

For example, on the speed front, a consolidated feed needs to refresh constantly. You could have users pull down to get the latest information, but over time that would be tedious; ideally, the newest information would load immediately on device unlock/wake.

In terms of accuracy, feeds have to be ranked, but it’s unclear whether today’s algorithms can always surface what’s relevant near the top of the screen. If you have to do too much scrolling, again, is that faster than an icon on the homescreen whose position the user knows by muscle memory?
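To make that ranking problem concrete, here is a minimal, purely illustrative sketch of how a card feed might be ordered by hand-tuned contextual signals like time, proximity, and past engagement. The card fields, weights, and function names are all hypothetical assumptions for the sake of the example, not anything Google has described shipping:

```python
from dataclasses import dataclass

# Illustrative only: each "card" carries a few contextual signals,
# and the feed is sorted by a hand-tuned relevance score.
@dataclass
class Card:
    title: str
    minutes_until_relevant: float  # e.g. time until a calendar event
    distance_km: float             # e.g. distance to the related place
    user_engagement: float         # 0..1, how often the user opens this card type

def relevance(card: Card) -> float:
    # Nearer in time, nearer in space, and frequently opened card types rank higher.
    time_score = 1.0 / (1.0 + max(card.minutes_until_relevant, 0.0) / 30.0)
    place_score = 1.0 / (1.0 + card.distance_km)
    return 0.5 * time_score + 0.3 * place_score + 0.2 * card.user_engagement

def rank_feed(cards: list[Card]) -> list[Card]:
    return sorted(cards, key=relevance, reverse=True)

if __name__ == "__main__":
    feed = [
        Card("Boarding pass", minutes_until_relevant=45, distance_km=0.2, user_engagement=0.9),
        Card("Weather", minutes_until_relevant=0, distance_km=0.0, user_engagement=0.4),
        Card("Package delivery", minutes_until_relevant=240, distance_km=5.0, user_engagement=0.6),
    ]
    for card in rank_feed(feed):
        print(f"{relevance(card):.2f}  {card.title}")
```

Even a toy like this shows the core tension: the weights encode guesses about what matters to you, and if they are off, the right card ends up below the fold and the feed loses to a tap on a familiar icon.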

Ultimately, the perfect contextual awareness needed to inform your device’s feed might not come until augmented reality, where a live camera – with permission – can know exactly what you need help with before you even do anything.

Google being Google

Speaking of radical ideas, Assistant Snapshot was not by any stretch of the imagination innovative. Nobody, even at launch, thought it was going to be revolutionary. Snapshot only replicated Google Now’s idea of being able to see relevant information in one place.

A fully-realized feed, for starters, needs buy-in from the entire company. Various product teams would have to be fine with people just interacting with widgets that don’t take up the entire screen. Essentially, their product is no longer an app, but rather a backend service that slots into an algorithmic feed. If Google were to pursue this idea again, it has a wide range of first-party offerings that could be directed to go this route, but getting third-party developer adoption would be an entirely different beast. Assistant Snapshot had some of the former with Calendar, reminders, and Gmail alerts, but it simply wasn’t as encompassing as Google Now.

Next on the list is the feed replacing the homescreen/app grid. In the beginning, the feed could start off as Google Now did – one swipe away on the home button – but eventually, once its accuracy was foolproof, it would be the first thing you see after unlocking your phone.

It would also have to be the only feed on a device. You could integrate Discover web stories into the new feed, but having multiple feeds is just confusing. The same can be said of the notifications shade. Ideally, alerts would be integrated and surfaced directly.

Paradise lost 

By all accounts, Google is done with the idea of a centralized feed. There will no longer be one place to access personal information, with Google explicitly telling Snapshot users to visit apps like Calendar and Gmail directly. Pixel owners might increasingly be able to use At a Glance on their homescreen and lockscreen as a dedicated surface for contextual information, but it’s far from an encompassing experience.

Ultimately, Google was too early to the feed concept. When Now started in 2012, the technology wasn’t ready, and the company did not want to do anything too radically different from competitors in the early days of the smartphone to risk alienating users and developers. 

Maybe that metaphor will be the basis of the next form factor, i.e. smart glasses. However, at this point, smartphones will likely work the same way for the foreseeable future, with a grid front and center rather than contextually aware, direct information.

