
Google announces next-gen ‘Pathways’ AI architecture to allow for more general, multimodal, & efficient models

Google Research today announced a next-generation AI architecture called “Pathways.” This “new way of thinking about AI” is meant to address current “weaknesses of existing systems.”

Google says Pathways can “train a single model to do thousands or millions of things” compared to the current, highly individualized approach. The old method takes a long time and “much more data” since it’s essentially starting from scratch every time.

Rather than extending existing models to learn new tasks, we train each new model from nothing to do one thing and one thing only (or we sometimes specialize a general model to a specific task). The result is that we end up developing thousands of models for thousands of individual tasks.

Pathways can “draw upon and combine its existing skills to learn new tasks faster and more effectively.” Similar to how humans – specifically mammalian brains – work, this results in an AI model that can handle many different tasks. 
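For a rough sense of what that skill reuse looks like in practice, here is a minimal Python sketch. It is our own illustration, not anything Google has published: a shared “backbone” learned on earlier tasks is kept around, and each new task only adds a small task-specific head instead of training an entire model from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch only (not Pathways itself): a shared feature
# extractor is reused across tasks, so adding a task means learning
# a small head rather than a whole new model.

def backbone(x, W_shared):
    """Shared feature extractor reused by every task."""
    return np.tanh(x @ W_shared)

# Pretend these shared weights were already learned on earlier tasks.
W_shared = rng.normal(size=(16, 32))

# Per-task heads are tiny compared to the backbone; a new task only
# needs its own entry here.
task_heads = {
    "classify_animals": rng.normal(size=(32, 5)),
    "detect_spam": rng.normal(size=(32, 2)),
}

def predict(x, task):
    features = backbone(x, W_shared)      # reused "skills"
    logits = features @ task_heads[task]  # task-specific readout
    return logits

x = rng.normal(size=(1, 16))
print(predict(x, "classify_animals").shape)  # (1, 5)
print(predict(x, "detect_spam").shape)       # (1, 2)
```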

As Google is already working toward with MUM and Lens next year, Pathways “could enable multimodal models that encompass vision, auditory, and language understanding simultaneously,” again like a human using multiple senses to perceive the world. Today, AI models typically analyze just one kind of data at a time: text, images, or speech.

So whether the model is processing the word “leopard,” the sound of someone saying “leopard,” or a video of a leopard running, the same response is activated internally: the concept of a leopard. The result is a model that’s more insightful and less prone to mistakes and biases.
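Here is a toy Python sketch of that shared-concept idea, purely illustrative and not how Google has said Pathways is built: separate encoders for text, audio, and images all project into one shared embedding space, where a trained system would pull the different “leopard” inputs toward the same vector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch of the multimodal idea (an assumption for illustration,
# not Google's model): each modality gets its own encoder, but all of
# them project into the same shared concept space.

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-in "encoders": random projections from each modality's raw
# feature size into a shared 64-dimensional space.
text_proj  = rng.normal(size=(300, 64))   # e.g. word-vector input
audio_proj = rng.normal(size=(128, 64))   # e.g. spectrogram features
image_proj = rng.normal(size=(512, 64))   # e.g. CNN features

def embed(features, projection):
    return normalize(features @ projection)

# In a trained system a contrastive objective would pull matching
# inputs together; here we only show the shared-space mechanics.
text_vec  = embed(rng.normal(size=(1, 300)), text_proj)
image_vec = embed(rng.normal(size=(1, 512)), image_proj)

similarity = (text_vec @ image_vec.T).item()  # cosine similarity
print(f"text-image similarity: {similarity:.3f}")
```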

More abstract forms of data can also be used for analysis:

And of course an AI model needn’t be restricted to these familiar senses; Pathways could handle more abstract forms of data, helping find useful patterns that have eluded human scientists in complex systems such as climate dynamics.

In addition to generalization, Google says Pathways allows for a degree of specialization, with models that are “sparse and efficient” because they don’t need to activate an entire neural network to accomplish a simple task:

We can build a single model that is “sparsely” activated, which means only small pathways through the network are called into action as needed. In fact, the model dynamically learns which parts of the network are good at which tasks — it learns how to route tasks through the most relevant parts of the model. A big benefit to this kind of architecture is that it not only has a larger capacity to learn a variety of tasks, but it’s also faster and much more energy efficient, because we don’t activate the entire network for every task.
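Sparse activation of this kind is commonly implemented with mixture-of-experts routing, and the Python sketch below assumes that style purely for illustration, since Google’s description doesn’t spell out the mechanism: a small learned router scores a pool of expert sub-networks, and only the top two actually run for each input.

```python
import numpy as np

rng = np.random.default_rng(2)

NUM_EXPERTS, TOP_K, DIM = 8, 2, 32

# A learned router plus a pool of expert sub-networks; only the
# TOP_K experts chosen by the router run for each example.
router_weights = rng.normal(size=(DIM, NUM_EXPERTS))
expert_weights = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def sparse_forward(x):
    scores = x @ router_weights                    # (batch, NUM_EXPERTS)
    top = np.argsort(scores, axis=-1)[:, -TOP_K:]  # top-k expert ids per example
    output = np.zeros_like(x)
    for i, expert_ids in enumerate(top):
        gates = softmax(scores[i, expert_ids])     # weights over chosen experts
        for gate, e in zip(gates, expert_ids):
            # Only the selected experts do any work for this example;
            # the other NUM_EXPERTS - TOP_K experts stay idle.
            output[i] += gate * np.tanh(x[i] @ expert_weights[e])
    return output, top

x = rng.normal(size=(4, DIM))
out, chosen = sparse_forward(x)
print("experts used per example:", chosen.tolist())
print("output shape:", out.shape)
```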

Google hopes that Pathways will move computing from an “era of single-purpose models that merely recognize patterns to one in which more general-purpose intelligent systems reflect a deeper understanding of our world and can adapt to new needs.” In practice, it should allow for the creation of more assistive tools in various fields.

AI is poised to help humanity confront some of the toughest challenges we’ve ever faced, from persistent problems like illness and inequality to emerging threats like climate change.


