DeepMind, Alphabet’s AI research lab, today unveiled AndroidEnv, a platform that lets reinforcement learning agents “interact with a wide variety of apps and services commonly used by humans through a universal touchscreen interface.”
Reinforcement learning (RL), a branch of machine learning, allows a system to learn through trial and error. In AndroidEnv, agents, like human users, make decisions based on what is displayed on screen and navigate by taps and gestures. Because the platform is Android, DeepMind says the “set of possible services and applications with which the agent could interact is virtually unlimited.”
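To make that interaction model concrete, here is a minimal sketch of a random agent driving AndroidEnv through its dm_env-style interface (reset, step, action spec). The loader arguments below (AVD name, SDK paths, task file) are placeholders for a local emulator setup, not values from the article, and the exact signature may differ across library versions.

```python
# Minimal sketch of a random agent interacting with AndroidEnv.
# Assumptions: a local Android Virtual Device, SDK paths, and a task
# definition file; adapt these to your own installation.
import numpy as np
from dm_env import specs

import android_env

env = android_env.load(
    avd_name='my_avd',                                # assumed AVD name
    android_avd_home='~/.android/avd',                # assumed AVD directory
    android_sdk_root='~/Android/Sdk',                 # assumed SDK location
    emulator_path='~/Android/Sdk/emulator/emulator',
    adb_path='~/Android/Sdk/platform-tools/adb',
    task_path='my_task.textproto',                    # assumed task definition
)


def random_action(action_spec):
    """Samples a uniformly random action that conforms to the action spec."""
    action = {}
    for name, spec in action_spec.items():
        if isinstance(spec, specs.DiscreteArray):
            # Discrete entries such as the action type (e.g. touch vs. lift).
            action[name] = np.asarray(
                np.random.randint(spec.num_values), dtype=spec.dtype)
        else:
            # Bounded entries such as a normalised (x, y) touch position.
            action[name] = np.random.uniform(
                spec.minimum, spec.maximum, size=spec.shape).astype(spec.dtype)
    return action


timestep = env.reset()
while not timestep.last():
    # Observations carry the raw screen pixels; actions describe touches
    # on the universal touchscreen interface.
    timestep = env.step(random_action(env.action_spec()))

env.close()
```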
For example, an agent might browse the internet, open the YouTube app, set an alarm, or play a game. Letting RL agents operate on a real-world platform used daily by billions of people opens up novel research opportunities.
Beyond that breadth, AndroidEnv is promising as a research environment because it requires agents to tackle challenges such as transfer and generalization, temporal abstraction, real-time dynamics, and scale.
Agents can be tasked with accomplishing goals like “finding directions to the park, booking a flight or maximizing the score in a game.” As for what AndroidEnv could eventually enable, DeepMind imagines that:
the ability to automatically learn sequences of actions might lead to advanced hands-free voice navigation tools; on-device AI models could help provide a better user experience; and trained agents could assist in device testing and quality assurance by benchmarking new apps, measuring latency, or detecting crashes or unintended behaviours in the Android OS.
More about DeepMind:
- Alphabet’s DeepMind makes AI breakthrough with AlphaFold that could aid drug research
- Google Maps ETAs prioritizing recent traffic patterns as DeepMind AI improves predictions
- Alphabet’s DeepMind hopes to aid researchers with AI insight into COVID-19 virus structure
- Google Play Store’s app recommendation system is powered by DeepMind