With Cloud Machine Learning, Google hopes to “take machine learning mainstream” by allowing developers to build “a new class of intelligent applications.” The company is launching two new APIs, for natural language and speech, in open beta today. Additionally, a new Cloud Region will reduce latency in apps and services for users on the North American West Coast.
The Cloud Natural Language API builds on Google’s extensive work in teaching computers to process and understand human language. With the new API, developers can look for meaning and structure in text across a variety of languages. Practical applications include digital marketers analyzing product reviews and service centers analyzing customer calls for sentiment.
Initially, the API will be able to parse the following in English, Spanish, and Japanese:
- Sentiment Analysis: Understand the overall sentiment of a block of text
- Entity Recognition: Identify the most relevant entities in a block of text and label them with types such as person, organization, location, event, product, and media
- Syntax Analysis: Identify parts of speech and create dependency parse trees for each sentence to reveal the structure and meaning of text
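To make the sentiment feature concrete, here is a minimal sketch of the REST request a developer would send. It assumes the `documents:analyzeSentiment` endpoint and an API key obtained from the Google Cloud Console; the helper function name is illustrative, not part of the API:

```python
import json

# Assumed REST endpoint for sentiment analysis (append "?key=YOUR_API_KEY" when calling).
SENTIMENT_ENDPOINT = "https://language.googleapis.com/v1/documents:analyzeSentiment"

def build_sentiment_request(text, language="en"):
    """Build the JSON body for an analyzeSentiment call.

    `language` may be one of the initially supported codes: "en", "es", "ja".
    """
    return {
        "document": {
            "type": "PLAIN_TEXT",   # raw text, as opposed to HTML
            "language": language,
            "content": text,
        },
        "encodingType": "UTF8",     # how character offsets in the response are computed
    }

body = build_sentiment_request("The product arrived quickly and works great.")
print(json.dumps(body, indent=2))

# To actually send it, POST the body to SENTIMENT_ENDPOINT with your API key,
# e.g. with the `requests` library:
#   requests.post(f"{SENTIMENT_ENDPOINT}?key=YOUR_API_KEY", json=body)
# The response includes a documentSentiment object with score and magnitude fields.
```

The entity and syntax features follow the same pattern against their own endpoints, differing only in the method name and response shape.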
The second API, Cloud Speech, features the same voice recognition technology used in Google Search and Google Now. Developers will be able to convert speech to text in over 80 languages. Uses include in-app voice translation and voice control in IoT devices.
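A speech-to-text call can be sketched the same way. The snippet below builds the JSON body for a synchronous recognition request; it assumes the `speech:recognize` REST endpoint, inline base64-encoded audio for short clips, and raw 16-bit PCM input (the function name is illustrative):

```python
import base64
import json

# Assumed REST endpoint for synchronous recognition (append "?key=YOUR_API_KEY" when calling).
SPEECH_ENDPOINT = "https://speech.googleapis.com/v1/speech:recognize"

def build_recognize_request(audio_bytes, language_code="en-US", sample_rate=16000):
    """Build the JSON body for a speech:recognize call."""
    return {
        "config": {
            "encoding": "LINEAR16",          # uncompressed 16-bit PCM
            "sampleRateHertz": sample_rate,
            "languageCode": language_code,   # one of the 80+ supported languages
        },
        "audio": {
            # Short clips can be sent inline, base64-encoded;
            # longer audio would be referenced from Cloud Storage instead.
            "content": base64.b64encode(audio_bytes).decode("ascii"),
        },
    }

# One second of silent 16 kHz mono audio as placeholder input.
body = build_recognize_request(b"\x00\x00" * 16000)
print(json.dumps(body["config"], indent=2))
```

POSTing the body to the endpoint with an API key returns transcription results with alternative hypotheses and confidence scores.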
Google has a number of Cloud Regions around the world to support customers who host apps and services with the company. The new Oregon Cloud Region is now open and offers services such as Google Compute Engine, Cloud Storage, and Container Engine.
In testing, users on the West Coast, including those in Vancouver, Seattle, Portland, San Francisco, and Los Angeles, have seen a 30-80% reduction in latency. A Tokyo Region will come online later this year, with 10 additional regions coming in 2017.