In a post on its Research Blog, Google today announced a significant improvement to TensorFlow, the company's open source machine learning software that powers things like Google Translate and many Photos features. TensorFlow can now run across multiple machines at the same time, thanks to new distributed computing support.
With support for distributed computing, TensorFlow can get much smarter in a shorter period of time. The software works by analyzing large amounts of data, and by splitting that work across many machines, it can now process even larger datasets in smaller windows of time. The end result is systems that train faster and perform better.
As Google's announcement puts it: "Today, we're excited to release TensorFlow 0.8 with distributed computing support, including everything you need to train distributed models on your own infrastructure. Distributed TensorFlow is powered by the high-performance gRPC library, which supports training on hundreds of machines in parallel. It complements our recent announcement of Google Cloud Machine Learning, which enables you to train and serve your TensorFlow models using the power of the Google Cloud Platform."
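To give a sense of what "training on your own infrastructure" involves, here is a minimal sketch of the cluster layout that distributed TensorFlow 0.8 expects. The hostnames, ports, and the `address_for` helper are illustrative assumptions, not from Google's announcement; in TensorFlow itself, a dict like this would be passed to `tf.train.ClusterSpec`, and each process would start a `tf.train.Server` for its assigned task, with the machines talking to each other over gRPC.

```python
# Illustrative cluster layout for distributed TensorFlow (TF 0.8-era API).
# Job names ("ps", "worker") follow TensorFlow's convention; the hostnames
# and ports below are made-up examples.
cluster = {
    # Parameter servers hold the shared model variables.
    "ps": ["ps0.example.com:2222"],
    # Workers compute gradients in parallel and push updates
    # to the parameter servers over gRPC.
    "worker": [
        "worker0.example.com:2222",
        "worker1.example.com:2222",
    ],
}

# Each process identifies itself by (job_name, task_index);
# this hypothetical helper looks up the address that task binds to.
def address_for(cluster, job_name, task_index):
    """Return the gRPC address for one task in the cluster."""
    return cluster[job_name][task_index]

print(address_for(cluster, "worker", 1))  # worker1.example.com:2222
```

In TensorFlow proper, the equivalent step would be `tf.train.ClusterSpec(cluster)` followed by `tf.train.Server(spec, job_name="worker", task_index=1)` in each process, which is how the library knows which machines participate in a training run.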
TensorFlow was initially made open source last November, when Google said that open source availability would allow the system to be adopted into all sorts of different products and research cases. Google hopes that with open source availability and distributed computing support, researchers, engineers, and hobbyists can help speed the system's learning along and get it to a much smarter level in less time.
You can watch Google’s video on TensorFlow below: