Large language models (LLMs), like Google Bard, are trained on “trillions of words” to learn how humans talk and to predict or generate the next words in a sentence that make for a good response. However, this technology is not actually aware of what’s factually correct. Given how new it is, some people are, understandably, not yet aware of that limitation. In the case of Bard, one phenomenon that emerged within days, if not hours, of the early access launch is people equating its responses with official Google news and announcements.
Since the underlying mechanism of an LLM is predicting the next word or sequence of words, LLMs are not yet fully capable of distinguishing between accurate and inaccurate information. For example, if you ask an LLM to solve a mathematical word problem, it will predict an answer based on similar problems it has learned from, not based on advanced reasoning or computation.
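To make that mechanism concrete, here is a minimal sketch of next-token prediction using the small, open GPT-2 model via Hugging Face’s transformers library. The model and prompt are purely illustrative assumptions on my part; Bard runs on Google’s own LaMDA model, so this shows the general technique, not Bard itself.

```python
# Minimal sketch of next-token prediction, the mechanism described above.
# GPT-2 is used purely for illustration; Bard itself runs on a different
# model (LaMDA), but the core idea is the same.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical prompt, chosen only to mirror the example in this article.
prompt = "Google is planning to integrate Bard into"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model outputs a score for every token in its vocabulary.
    logits = model(**inputs).logits

# Pick the single most likely next token. Nothing in this step checks
# whether the continuation is factually correct; it is only statistically
# plausible given the training data.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_token_id))
```

The key takeaway: the model ranks plausible continuations, and at no point does any step consult a source of ground truth.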
Since the launch on Tuesday, people have asked Bard about unreleased Google products and whether the company is going to do this or that. Some of the generated responses are clearly caveated as being rumors.
However, other Bard-generated answers read as bold announcements of future product plans, like: “Yes, Google is planning to integrate Bard into Google Assistant.” After explaining what both Assistant and Bard are, the response says this is happening “in the coming weeks.” Bolder still, it goes on to describe some of the supposed future functionality and Google’s alleged rationale behind the move.
None of this is true
None of that is true; it is more than likely based on articles or comments simply saying that Google should integrate Bard into Assistant. The company has not made such an announcement, given a time frame, or previewed possible features. Bard derives its knowledge from public information you can already find, not, for example, from internal company documents.
Another particularly egregious example is somebody writing a news post about when the Pixel 7a will launch, complete with a date, based entirely on screenshots of someone else’s conversation with Bard. Similarly, taking a response from Bard and presenting it as an official Google position or stance is downright irresponsible and borders on misinformation.
In terms of information literacy, every time somebody has posted a Bard-derived “Google announcement,” others have been quick to point out the nature of LLMs and their tendency to hallucinate. Of course, these conversations have been happening in more tech-minded circles. What worries me is what the conversations outside this literate audience look like.
Even more worrying is what happens when people just quickly glance at a title or brief message and absorb that piece of Bard-generated “information” as a data point that shapes what they believe is happening or true.
As a piece of technology rolls out, people eventually learn its limitations. Ironically, in this case, awareness of what LLMs can and cannot do has greatly benefited from the freakout over perceived consciousness/sentience, acts of gaslighting, and other bizarre behaviors. In that sense, I’m somewhat hopeful that we’ll get past this period in relatively short order.
But until then: Bard does not have any inside knowledge about unreleased Google products, their release dates, or other announcements and news just because it’s made by Google.