On-device machine learning offers a number of benefits, from speed (lower latency) to privacy. To help increase adoption on Android, Google is now adding an “Android ML Platform” — primarily TensorFlow Lite — directly to Play services.
Google Play services is responsible for key user-facing features on Android and gives third-party app developers access to various tools. The latest addition is on-device machine learning.
The company identified several constraints that keep apps from moving away from cloud-based ML. These range from the app-size cost of bundling additional ML libraries to varying performance across devices, which results in significant stability and accuracy differences. Lastly, “maximizing reach can lead to using older, more broadly available APIs.”
Google’s solution is an “updateable, fully integrated ML inference stack” called the “Android ML Platform.” It has three components, starting with Google making on-device inference capabilities directly available on almost every Android device:
TensorFlow Lite will be available on all devices with Google Play services. Developers will no longer need to bundle the runtime in their apps, reducing app size.
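Google hadn’t published the client API at announcement time, so the Kotlin sketch below is only an illustration of what consuming a Play services-hosted runtime looks like, based on the `com.google.android.gms:play-services-tflite-java` artifact and its `TfLite`/`InterpreterApi` entry points as they later shipped; the model buffer and tensor shapes are placeholders.

```kotlin
import android.content.Context
import com.google.android.gms.tflite.java.TfLite
import org.tensorflow.lite.InterpreterApi
import org.tensorflow.lite.InterpreterApi.Options.TfLiteRuntime
import java.nio.ByteBuffer

// Initialize the TFLite runtime that ships inside Google Play services,
// then run inference without bundling a runtime into the APK.
fun classify(context: Context, model: ByteBuffer) {
    // Resolves the Play services-hosted runtime asynchronously.
    TfLite.initialize(context).addOnSuccessListener {
        val options = InterpreterApi.Options()
            // FROM_SYSTEM_ONLY: use only the Play services runtime,
            // never one compiled into the app.
            .setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY)
        val interpreter = InterpreterApi.create(model, options)

        // Placeholder shapes; real shapes come from the model.
        val input = Array(1) { FloatArray(224 * 224 * 3) }
        val output = Array(1) { FloatArray(1000) }
        interpreter.run(input, output)
        interpreter.close()
    }
}
```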
To ensure “optimal” performance across devices, a new Automatic Acceleration feature in TensorFlow Lite for Android enables “per-model testing to create allowlists for specific devices taking performance, accuracy, and stability into account.” Available later this year, this feature determines whether hardware acceleration is enabled for a given device and model.
Lastly, Google will be updating the Neural Networks API (NNAPI) outside of Android OS releases. It’s also working with chipset vendors, like Qualcomm, to deliver the latest device drivers outside of OS updates.
According to Google, this “will let developers dramatically reduce testing from thousands of devices to a handful of configurations,” with the company “launching later this year with Qualcomm as our first partner.”
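Driver updates arrive transparently, so app code keeps targeting NNAPI the usual way. As a rough illustration, here is a minimal Kotlin sketch using the standard TensorFlow Lite NNAPI delegate (`NnApiDelegate`); again, the model buffer and tensor shapes are placeholders.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.nio.MappedByteBuffer

// Route TFLite inference through NNAPI so it benefits from whatever
// vendor drivers are present, including ones delivered outside OS updates.
fun runWithNnapi(model: MappedByteBuffer) {
    val nnApiDelegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnApiDelegate)
    Interpreter(model, options).use { interpreter ->
        // Placeholder shapes; real shapes come from the model.
        val input = Array(1) { FloatArray(4) }
        val output = Array(1) { FloatArray(2) }
        interpreter.run(input, output)
    }
    nnApiDelegate.close() // free NNAPI resources after the interpreter
}
```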