Google’s Gemini AI chatbot has supported image generation since it was still called Bard, but the company is now under fire because its output skews away from historical accuracy when generating pictures of people. Google says it’s working to improve this.
Image generation is an extremely useful feature of today’s generative AI products, but accuracy isn’t always a strong suit. Google’s Gemini is pretty good at generating images of people, but users have noticed that, when asked about historical subjects, it tends to skew toward depictions that aren’t exactly accurate.
The Twitter/X post that brought this issue to light showed prompts asking Gemini to generate images of Australian, American, British, and German women. All four prompts resulted in images of women with darker skin tones, which, as Google’s Jack Krawczyk pointed out, is not incorrect, but may not be what users expect.
But a bigger issue surfaced in the wake of that post: Gemini also struggles to accurately depict people in a historical context, often rendering them with skin tones or nationalities that don’t match the historical record.
Google, in a statement posted to Twitter/X, admits that Gemini’s AI image generation is “missing the mark” on historical depictions and that the company is working to improve it. Google also says that the diversity represented in images generated by Gemini is “generally a good thing,” but it’s clear some fine-tuning needs to happen.
More on Gemini:
- Gemini can now set reminders with Google Tasks
- Gemini rolling out to Gmail, Docs, and more with Google One AI Premium
- Gemini on your phone but Google Assistant in your ears