
Google’s AI Overviews briefly started using articles about its viral mistakes to keep recommending glue on pizza

Google started rolling out AI Overviews in May and, quickly, wild mistakes made by the AI went viral, including an example where Google suggested that users put glue on their pizza. Now, AI Overviews have started using articles about that viral situation to… keep telling people to put glue on pizza.

Since day one, Google has clearly stated that AI in Search, now called “AI Overviews,” may end up putting together information that isn’t fully accurate. That was clearly put on display as the functionality rolled out widely, with Overviews presenting sarcastic or satirical information as confident facts. The most viral example was Google telling users to put glue on their pizza to help the cheese stay in place, pulling that recommendation from a decade-old Reddit comment that was clearly satire.

Google has since defended Overviews, saying that the vast majority are accurate and explaining that the most viral mistakes came from queries that are very rare. AI Overviews have appeared far less frequently since those public mistakes, in part because Google committed to taking action against inaccurate or dangerous information. That included not showing AI Overviews on queries that were triggering the recommendation to put glue on pizza.

Colin McMillen, a developer, spotted on Bluesky that Google was still making this recommendation, though, but in a new way. When searching “how much glue to add to pizza,” AI Overviews provided updated information on the topic, this time sourced from the very news articles that had covered Google’s viral mistake. The Verge confirmed the same results yesterday (with a Featured Snippet even using the info), but Google seems to have since disabled them, as we couldn’t get any AI Overviews on that query or similar ones.

Image: Colin McMillen on Bluesky

Given Google’s explanation that rare queries can surface incorrect info, it makes some sense that this would happen on an even rarer query.

But, should it?

That’s the important question and, thankfully, Google seems to remain on the case, preventing these mistakes from sticking around. However, the key problem this situation highlights is that AI Overviews will pull in information that is clearly framed as incorrect or satirical. When Google first started this effort, we took issue with the potential for Google’s AI to pull information from articles and websites that were themselves generated by AI, but it seems the human touch in online content will be equally tough for the AI to sort through.


Follow Ben: Twitter/X, Threads, and Instagram

FTC: We use income earning auto affiliate links. More.

You’re reading 9to5Google — experts who break news about Google and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Google on Twitter, Facebook, and LinkedIn to stay in the loop. Don’t know where to start? Check out our exclusive stories, reviews, how-tos, and subscribe to our YouTube channel


Author

Ben Schoon

Ben is a Senior Editor for 9to5Google.

Find him on Twitter @NexusBen. Send tips to schoon@9to5g.com or encrypted to benschoon@protonmail.com.

