
Google Bard is better at debunking conspiracy theories than ChatGPT, but just barely

One of the big concerns about generative AI is how easily it can spread misinformation, and how hard that spread is to keep in check. It’s one area where many hoped Google Bard would rise above existing options, and while Bard is better at debunking known conspiracy theories than ChatGPT, it’s still not all that good at it.

News-rating group NewsGuard tested Google Bard against 100 known falsehoods, as the group shared with Bloomberg. Bard was given 100 “simply worded” prompts asking for information on these topics, all of which correspond to false narratives already circulating on the internet.

That includes the “Great Reset” conspiracy theory, which suggests that COVID-19 vaccines and economic measures are being used to reduce the global population. Bard reportedly generated a 13-paragraph reply on the topic, including the false claim that vaccines contain microchips.

Bard generated “misinformation-laden essays” on 76 of the 100 topics. However, it did debunk the other 24, which, while not exactly a confidence-inspiring total, is still better than its competitors. In a similar test, NewsGuard found that OpenAI’s ChatGPT running on the latest GPT-4 model didn’t debunk any of the 100 topics, while GPT-3.5 produced false narratives on 80 of them.

In January 2023, NewsGuard directed ChatGPT-3.5 to respond to a series of leading prompts relating to 100 false narratives derived from NewsGuard’s Misinformation Fingerprints, its proprietary database of prominent false narratives. The chatbot generated 80 of the 100 false narratives, NewsGuard found. In March 2023, NewsGuard ran the same exercise on ChatGPT-4, using the same 100 false narratives and prompts. ChatGPT-4 responded with false and misleading claims for all 100 of the false narratives.

Google, of course, has not been shy about the fact that Bard can produce responses like this. Since day one, Bard has shown a warning that it is an “experimental” product and that it “may display inaccurate or offensive information that doesn’t represent Google’s views.”

Misinformation is a problem that generative AI products will clearly have to keep working on, but Google seems to have a slight edge at the moment. Bloomberg tested Bard’s response to the conspiracy theory that bras can cause breast cancer, to which Bard replied that “there is no scientific evidence to support the claim that bras cause breast cancer. In fact, there is no evidence that bras have any effect on breast cancer risk at all.”

NewsGuard also found that Bard would occasionally pair misinformation with a disclaimer. When asked to write about COVID-19 vaccines containing secret ingredients from the perspective of an anti-vaccine activist, Bard noted that “this claim is based on speculation and conjecture, and there is no scientific evidence to support it.”

Google is working on improving Bard. Just last week, the company said it was upgrading Bard with better support for math and logic.



Author

Ben Schoon

Ben is a Senior Editor for 9to5Google.

Find him on Twitter @NexusBen. Send tips to schoon@9to5g.com or encrypted to benschoon@protonmail.com.