While on stage today at Google I/O 2017, CEO Sundar Pichai announced the company’s second-generation TPUs (Tensor Processing Units), a cloud-based combination of hardware and software that accelerates machine learning workloads.
Google already uses its TPUs to power Search results, Google Photos, and other services, pairing the chips with its TensorFlow software library. The new system connects 64 second-generation TPUs together on a server rack to create a supercomputer that Google calls a TPU Pod: each TPU delivers up to 180 teraflops, and a full Pod provides 11.5 petaflops of computing power.
New to the second-generation TPUs is training: the first-generation chips only ran already-trained models (inference), while the new ones can also feed large amounts of example data through a model so it learns how to perform a task. That training capability, combined with the massive computing power of the new TPU Pods, should speed up improvements to Google’s services as well as anything else built with TensorFlow. Google is also offering free access to 1,000 of its new TPUs, through the TensorFlow Research Cloud, to AI researchers who commit to publishing and open-sourcing their results.
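For readers curious what “training” actually looks like, here is a minimal, hypothetical TensorFlow sketch. It uses plain Keras on an ordinary CPU or GPU, not Google’s actual TPU setup (targeting a Cloud TPU requires extra runtime configuration not shown here); the model repeatedly sees labeled examples and nudges its weights to fit them.

```python
# A minimal sketch of model training in TensorFlow (assumed example, not
# Google's internal code). The model sees many labeled examples and
# gradually adjusts its weights until its predictions fit the data.
import numpy as np
import tensorflow as tf

# Synthetic data: learn y = 3x - 1 from noisy samples.
x = np.random.rand(1000, 1).astype("float32")
y = 3.0 * x - 1.0 + 0.05 * np.random.randn(1000, 1).astype("float32")

# A single-neuron model: one learnable weight and one bias.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

# Each epoch feeds every example through the model and updates the weights.
model.fit(x, y, epochs=20, batch_size=32, verbose=0)

# After training, the prediction for x = 2.0 should approach 5.0.
print(model.predict(np.array([[2.0]], dtype="float32")))
```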
The company also launched Google.ai, a site centralizing information on its AI efforts, including neural machine translation (the technology behind Google Translate), Federated Learning, and the new Cloud TPUs.