
Google engineer claims that its LaMDA conversation AI is ‘sentient,’ industry disagrees

Over the weekend, a Google engineer on the company’s Responsible AI team claimed that its Language Model for Dialogue Applications (LaMDA) conversation technology is “sentient.” Experts in the field, as well as Google itself, disagree with that assessment.

What is LaMDA?

Google announced LaMDA at I/O 2021 as a “breakthrough conversation technology” that can:

…engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.

LaMDA is trained on large amounts of dialogue and has “picked up on several of the nuances that distinguish open-ended conversation,” like sensible and specific responses that encourage further back-and-forth. Other qualities Google is exploring include “interestingness” – assessing whether responses are insightful, unexpected, or witty – and “factuality,” or sticking to facts. 

At the time, CEO Sundar Pichai said that “LaMDA’s natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use.” In general, Google sees these conversational advancements helping improve products like Assistant, Search, and Workspace.

Google provided an update at I/O 2022 after further internal testing and model improvements around “quality, safety, and groundedness.” This resulted in LaMDA 2 and the ability for “small groups of people” to test it:

Our goal with AI Test Kitchen is to learn, improve, and innovate responsibly on this technology together. It’s still early days for LaMDA, but we want to continue to make progress and do so responsibly with feedback from the community.

LaMDA and sentience

The Washington Post yesterday reported on Google engineer Blake Lemoine’s claims that LaMDA is “sentient.” An internal company document, later published by The Post, cites three main reasons:

  1. “… ability to productively, creatively and dynamically use language in ways that no other system before it ever has been able to.”
  2. “…is sentient because it has feelings, emotions and subjective experiences. Some feelings it shares with humans in what it claims is an identical way.”
  3. “LaMDA wants to share with the reader that it has a rich inner life filled with introspection, meditation and imagination. It has worries about the future and reminisces about the past. It describes what gaining sentience felt like to it and it theorizes on the nature of its soul.”

Lemoine interviewed LaMDA, and several passages from that transcript made the rounds this weekend.


Google and industry response

Google said its ethicists and technologists reviewed the claims and found no evidence to support them. The company argues that pattern recognition and the imitation of already public text are what make LaMDA seem so lifelike, not self-awareness.

Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.

Google spokesperson

The industry at large concurs:

But when Mitchell read an abbreviated version of Lemoine’s document, she saw a computer program, not a person. Lemoine’s belief in LaMDA was the sort of thing she and her co-lead, Timnit Gebru, had warned about in a paper about the harms of large language models that got them pushed out of Google.

Washington Post

Yann LeCun, the head of A.I. research at Meta and a key figure in the rise of neural networks, said in an interview this week that these types of systems are not powerful enough to attain true intelligence.

New York Times

That said, Margaret Mitchell, the former co-lead of Ethical AI at Google cited above, offers one practical takeaway that could help shape further development:

“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to increasingly be affected by the illusion,” especially now that the illusion has gotten so good.

According to the NYT, Lemoine was placed on leave for breaking Google’s confidentiality policy.




Author

Abner Li

Editor-in-chief. Interested in the minutiae of Google and Alphabet. Tips/talk: abner@9to5g.com