Google has issued an apology after its new AI model, Gemini, generated racially biased image results in response to user queries. The company acknowledged the issue in a statement, attributing it to "limitations in the training data used to develop Gemini."
“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” said Google in a post on the social media platform X. “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
The controversy has drawn attention primarily, though not exclusively, from conservative voices critiquing a tech giant perceived as politically left-leaning. Recently, a former Google employee posted on social media about the difficulty of getting the company's AI tool to generate images of white people. Other users on social media highlighted similar difficulties, citing prompts like "generate a picture of a Swedish woman" or "generate a picture of an American woman," which predominantly yielded AI-generated people of colour, according to a report by The Verge.
The critique gained traction among right-wing circles, particularly regarding requests for images of historical figures such as the Founding Fathers, which purportedly yielded predominantly non-white AI-generated results. Some of these voices framed the outcomes as indicative of a deliberate attempt by the tech company to diminish the representation of white individuals, with at least one employing coded antisemitic language to assign blame.
Concerns remain about the potential for biased AI to reinforce existing stereotypes and contribute to systemic discrimination. Earlier this year, OpenAI faced similar accusations that its AI tools promoted harmful stereotypes: when OpenAI's Dall-E image generator was asked to create images of a CEO, most of the results were pictures of white men.