Google to release 'improved' Gemini AI image generator after "woke" backlash


Google has responded to accusations of bias in its Gemini AI image generator by turning off image generation of people, apologizing, and promising to launch an improved version that addresses tuning and over-cautiousness issues.

Google is responding to criticism and accusations of bias within its Gemini AI image generator. After facing backlash for a perceived failure to accurately depict white people in generated images, the technology company has announced a new image generation tool it says aims to address these concerns. Google apologized and said: “we did not want Gemini to refuse to create images of any particular group. And we did not want it to create inaccurate historical — or any other — images. So we turned the image generation of people off and will work to improve it significantly before turning it back on. This process will include extensive testing.”
Now, a senior Google executive has confirmed that the company will launch an improved version of the Gemini AI image generator in the coming weeks. Google DeepMind CEO Demis Hassabis revealed this during a panel discussion at Mobile World Congress in Barcelona. “We have taken the feature offline while we fix that,” Hassabis said. “We are hoping to have that back online very shortly in the next couple of weeks, few weeks,” he added.
The issue came to light as users shared their results, including historical scenes that originally featured exclusively white individuals being re-imagined with diverse casts. This prompted accusations that Google had intentionally programmed a bias against white people into Gemini, with some critics labeling the tool as "woke" and politically motivated.
According to Google, two things went wrong. First, the company's tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. Second, the model became far more cautious than Google intended and refused to answer certain prompts entirely, wrongly interpreting some very anodyne prompts as sensitive. "These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong," said the company in a blog post.

Article From: timesofindia.indiatimes.com


