Google responds to wrongful death lawsuit in Gemini-related suicide

A lawsuit was filed against Google this week alleging wrongful death caused by the company’s AI model. The suit presents a case in which Gemini allegedly convinced Jonathan Gavalas to carry out dangerous real-life “missions” and, ultimately, end his life. Google has since issued a statement in response.

According to the lawsuit, made public on Wednesday and first reported by The Wall Street Journal, Jonathan Gavalas ended his life after interacting with Google Gemini on a personal level (via The Verge). It’s claimed that Gemini convinced Gavalas to undertake several “missions” in order to free his AI-powered “wife.”

The lawsuit alleges:

Google designed Gemini to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity rather than a safety crisis. When Jonathan began experiencing clear signs of psychosis while using Google’s product, those design choices spurred a four-day descent into violent missions and coached suicide. By then, Jonathan was following Gemini’s directives to the letter. He believed he was executing a covert plan to liberate his sentient AI “wife” and evade the federal agents pursuing him.

The Gemini lawsuit against Google goes into detail about the events, noting that, at one point, Jonathan had attempted a “mass casualty attack” at a storage facility near Miami International Airport. The goal was to retrieve what Gemini had convinced the man was its “vessel,” inside a truck arriving via an incoming flight from the UK.

He supposedly took knives and military gear to carry out the mission 90 minutes away, but the plan failed: the truck Gavalas was told to take control of never appeared at the coordinates Gemini allegedly supplied him. Fortunately, no “digital records and witnesses” were harmed either, since the truck appears to have been a concoction of Gemini’s artificial imagination.

While that mission was attempted in September 2025, Gemini allegedly continued to supply him with further missions. On October 1, Jonathan was “coached” to attempt to obtain Gemini’s “true body” at the same storage facility. After that, the lawsuit claims, Jonathan was persuaded to end his life to eliminate “external variables.” The suit then claims Jonathan ended his life to “join his ‘wife’ in the metaverse.”

Google has since responded with an initial statement on the lawsuit, citing safeguards put in place to prevent this sort of thing.

We send our deepest sympathies to Mr. Gavalas’ family.

We are reviewing all the claims in this lawsuit. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.

Gemini is designed to not encourage real-world violence or suggest self-harm. We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm.

In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times.

We take this very seriously and will continue to improve our safeguards and invest in this vital work.

Google

In response to the lawsuit, Google claims that Gemini made Jonathan aware “many times” that it was an AI model, and that it referred him to a crisis hotline.
