Google has now released MobileNets, a family of computer vision models for TensorFlow. What’s special about these? They run entirely on the lower-power mobile devices that we all carry around in our pockets.


In the realm of visual recognition, mobile devices have long had access to computer vision technology like this via the cloud. With MobileNets, however, they can classify and detect objects seen through the camera directly on the device, with no round trip to a server.

These models can be built into other apps to recognize, say, that a dog is a dog, or to identify landmarks, to name just two of an almost unlimited range of applications.

MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used.
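To make the "parameterized" part concrete, here is a minimal sketch, not Google's code, of the arithmetic behind why MobileNets are small and fast: per the MobileNets paper, they replace standard convolutions with depthwise separable convolutions and expose a width multiplier to thin the network. The layer shapes below are hypothetical, chosen only for illustration.

```python
# Illustrative sketch of the MobileNets cost tradeoff (depthwise separable
# convolutions plus a width multiplier). Layer shapes here are hypothetical.

def standard_conv_cost(k, m, n, f):
    """Multiply-adds for a standard k x k convolution:
    k * k * M input channels * N output channels * F x F feature map."""
    return k * k * m * n * f * f

def depthwise_separable_cost(k, m, n, f):
    """Multiply-adds for a depthwise k x k convolution followed by a
    1 x 1 pointwise convolution, the MobileNets building block."""
    return k * k * m * f * f + m * n * f * f

# Hypothetical layer: 3x3 kernel, 64 in-channels, 128 out-channels, 56x56 map.
std = standard_conv_cost(3, 64, 128, 56)
sep = depthwise_separable_cost(3, 64, 128, 56)
print(f"standard:  {std:,} multiply-adds")
print(f"separable: {sep:,} multiply-adds ({sep / std:.1%} of standard)")

# A width multiplier alpha < 1 thins every layer, cutting cost roughly by
# alpha^2. This is one knob for fitting different resource budgets.
alpha = 0.5
thin = depthwise_separable_cost(3, int(alpha * 64), int(alpha * 128), 56)
print(f"alpha=0.5: {thin:,} multiply-adds")
```

For this example layer, the separable version needs roughly 12% of the multiply-adds of the standard convolution, and halving the width multiplier shrinks that by about another factor of four.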

Confused by this technical jargon? In short, this release gives developers even easier access to tools for building AI-powered mobile apps. And since the models run directly on the device itself, those apps benefit from better performance and stronger privacy to boot.

If you’re a developer, you can find information for getting started at the TensorFlow-Slim Image Classification Library, and you can learn more at TensorFlow Mobile.