At I/O 2016 and 2017, Sundar Pichai said Google was moving from “mobile first to AI first.” This involved “rethinking all [its] products and applying machine learning,” with search in Google Photos and Smart Reply as early examples. However, for much of last year, Google was seen as on the back foot in applying large language models, and that AI-first proclamation was largely forgotten.
Historically, I/O is where Google makes its biggest announcements of the year. That said, the AI race has forced the company into more frequent announcements over the past year, at a pace it might not otherwise have chosen. With I/O 2024 a few days away, here’s what we expect Google to announce.
Last year, the stakes for I/O were elevated, with many questioning Google’s ability to respond to AI competition. Since then, we’ve seen the underlying Gemini 1.0 and 1.5 model families, as well as various Gemini-branded capabilities in Google’s biggest apps. The company is in a better place than it was 12 months ago, but more has to be done to lead the field.
To really be the AI-first company, Google needs to release features that are both transformational and widely available, especially to free users.
Apps
Google’s biggest advantage in the AI landscape is the hundreds of millions of users it has across services that are key to everyday life. While new AI apps are constantly popping up for early adopters to try, Google has the ability to be most people’s introduction to AI-powered tools by putting those features in what they’re already using.
That brings us to Gmail and Docs/Sheets/Slides. Today, Google offers Help me write, Help me organize, and Help me visualize across those applications. Those generative AI features are pretty straightforward and address common tasks that people need to get done every day. In terms of what’s next, Google previewed the side panel at last year’s I/O, and it has been in testing over recent months.
I’d argue all of that is geared toward productivity; I want to see more personal use cases, especially in Gmail and Calendar, where AI helps with other aspects of people’s lives.
Other key apps people use are Google Maps and Search. The Search Generative Experience (SGE) was announced a year ago, and I wonder if Google has deemed it mature enough to exit the Labs preview program. I can see the utility of directly providing an answer rather than requiring people to sift through links. At the same time, the ramifications of that approach for publishers are, to say the least, profound.
Google Maps has been testing generative AI search that lets you find places in a conversational manner. Compared to open-ended web search, conversational search seems to excel in a more limited domain like this.
Then there’s Gemini, the application and website. On mobile, its functionality is lacking, and it’s not a particularly good phone assistant, at least not nearly to the extent of the long-established Google Assistant. It certainly seems like upcoming updates will finally address this.
Chrome is the other huge application people use. It just added a handy Gemini shortcut in the address bar. Historically, the browser doesn’t get much discussion at I/O, so I’m curious about Google’s full vision of what AI looks like in Chrome. In recent months, it has introduced Help me write, a tab organizer, and a theme generator, but nothing particularly transformational just yet, especially compared to competitors that have leaned into AI.
Platforms
Then there’s the platform people are using these apps on. Google detailing what’s coming to Android 15 is a given. How big those updates are is a separate question after last year’s surprisingly limited stage presence and quieter public release.
The primary use of generative AI in Android today is Gemini Nano powering Gboard Smart Reply, Magic Compose in Messages, and Recorder summarization. On-device AI will be important both for keeping cloud costs down and for making Android the place to build generative AI mobile applications.
That said, save for wallpapers, we haven’t really seen AI applied to the user experience. Rethinking an OS for AI is a big question: what does a smarter homescreen, lockscreen, or notification shade look like? What can you do with macros, like Shortcuts on iOS or the third-party Tasker on Android, and how could Google replicate and automate that with AI?
Comparisons will be made when Apple launches iOS 18 later this year, which is expected to bring a slew of on-device AI features. Of course, the big difference is that Android and its apps can be updated more easily than Apple’s monolithic OS.
On the platform front, there’s Wear OS 5 and (probably) the long-rumored Android XR. Meta is working to make Horizon OS available to other headset makers, with Google’s Samsung partnership still toiling away in the background.
Hardware
Moving up another layer is the hardware running those operating systems and apps. Google unexpectedly announced the Pixel 8a, as well as the standalone Pixel Tablet, a week before I/O. Officially, that was to give the 8a its own moment, as the mid-ranger would have been lost in the fray of other I/O news.
The question now becomes whether Google will tease the Pixel 9 series at I/O 2024. This happened for the Pixel 7 but not the Pixel 8. One argument against it is that the Pixel 8 and 8 Pro are still solid phones with some runway. For example, we’re still waiting for Zoom Enhance to launch, and it would look odd if Google announced the next phone in any capacity before completing the Pixel 8’s feature set. More anecdotally, in the US, Google is still heavily advertising the 8 series as part of the NBA Playoffs.
What I could see Google teasing is the rumored “Pixie” AI assistant. The precedent here would be how Google showed off the new Google Assistant at I/O several months before the Pixel 4 launch. That could help drum up excitement, though less so if it won’t be available on existing devices.