Google AI

Following yesterday’s Spring (or Fall) Doodle that also appeared in the Pixel Launcher’s search bar, Google is celebrating Johann Sebastian Bach. This music-making experience is Google’s first AI-powered Doodle, and it lets users create a harmonized melody in the style of the Baroque composer.
In recent months, G Suite has added a number of machine learning-powered features to boost end-user productivity. After launching on the web last year, automatic room suggestions are coming to Google Calendar for Android and iOS.
One of the more impressive announcements that Google made at I/O 2018 was “Lookout.” The app helps visually impaired users by providing verbal feedback about the objects they point their phone at. Lookout for Android is now available for the Google Pixel.
For the past several releases, Google has been working on “faster voice typing” in Gboard for Android that works offline. Today it’s official on Pixel phones, and Google is detailing the “end-to-end, all-neural, on-device speech recognizer” it created.
For the past three years, Google and Verily have leveraged machine learning to screen for the two leading causes of preventable blindness in adults. In India, this algorithm is now being used in a clinical setting, while the European Union has certified it as a medical device.
A report yesterday revealed that the upcoming Google Maps AR navigation mode is first being tested with Local Guides. Google is now detailing the “global localization” technique behind the feature and how it leverages a Visual Positioning Service, Street View, and machine learning.
Speech synthesis technology has advanced a great deal in recent years, with neural networks from DeepMind doing an especially good job of creating realistic, human-like voices. As with any technology, it can be abused, and Google is working to advance state-of-the-art research on fake audio detection.
Top Shot is one of the many AI-powered camera features Google introduced with the Pixel 3. Google AI is now detailing how the smart feature works and what qualities your phone is looking for when it suggests an alternate frame.
Over the past year, Google AI has opened a number of research facilities around the world. The latest will be a lab at Princeton University to foster collaboration with the academic community.
Back in June, Google released AI Principles — in response to Project Maven backlash — that codified how artificial intelligence would be used in research and products going forward. The company is now detailing additional initiatives and processes that it implemented to ensure that all guidelines are enforced.
The potential uses for AI are vast, and Google is already applying it both to consumer products and to services for third-party developers. These APIs include natural language processing, text-to-speech, and image recognition. Unlike other companies, however, Google does not offer facial recognition technology, and that absence is deliberate.
Last month, Google Translate received the Google Material Theme on the web along with a responsive design. In line with other company-wide efforts to “promote fairness and reduce bias in machine learning,” Google Translate will now provide feminine and masculine translations for some gender-neutral words.
Unlike other smartphone cameras that feature a Portrait Mode, the Pixel line gets by with only one rear camera. With the Pixel 3, Google turned to machine learning to improve depth estimation and “produce even better Portrait Mode results.”
From Call Screen on the Pixel 3 to Gmail Smart Reply, machine learning is already being used in everyday Google products. The company has also been encouraging adoption with various courses and start-up programs. With the Google AI Impact Challenge today, Google is committing resources and $25 million to address societal challenges.
Earlier this week, Google announced that it was piloting a machine learning intensive for college students. Today, its broader Machine Learning Crash Course is adding a new training module on fairness when building AI.
One of the most impressive camera tricks that Google introduced with the Pixel 3 last week is Super Res Zoom. The Google AI team behind the feature detailed the technical aspects today, and how Google wants to challenge the idea that digital zoom is “the ‘ugly duckling’ of mobile device cameras.”
As we noted last night, the Google Pixel 3 and Pixel 3 XL deliver delightful hardware that is noticeably more refined than last year’s. Beyond the physical devices, a key part of Made by Google is the “AI + software” experience, with the Pixel 3 debuting a handful of new Google AI-powered features at launch.
Google is leveraging machine learning throughout all of its products from bilingual support in Assistant to productivity optimizations in Drive, and even to cool data centers. The company is now tapping AI to improve flood forecasting as part of Google Public Alerts.
At I/O 2018, Google AI was announced as a company-wide initiative that encompasses Google Research. In keeping with that AI-first approach, Google has been opening research centers around the world. The latest, in France, is now open amid further expansion into the country.
Google announced bilingual support for Assistant back in February, and at IFA 2018 it’s beginning to roll out the functionality in six languages. Users can speak to Assistant in two default languages, with phones and smart speakers able to understand and reply in either.
In Android 9 Pie, Alphabet’s DeepMind division is responsible for machine learning features like Adaptive Battery and Brightness. One of the first collaborations between the two companies was an AI system tasked with increasing energy efficiency at Google’s data centers. Two years later, an AI has been granted direct control over cooling these servers.
As Google moves to an AI First world, all of its products are being infused with machine learning smarts. Nowhere is that more apparent than in G Suite as Google tries to differentiate its Cloud offerings. Google Drive is working on a new “Priority” view that features a Feed, as well as user-curated Workspaces.
Starting with the Machine Learning Crash Course in February, Google has released a number of tools and resources for developers to learn and integrate artificial intelligence. Seedbank is the latest: a home for interactive ML examples that run on the web and can be quickly edited.
Google unveiled a big revamp for all its advertising products last month that simplified branding and introduced new features. Numerous services will be consolidated and better integrated with one another. Today, at Google Marketing Live, the company is better detailing these changes that leverage machine learning.