After pausing Gemini’s image generation feature over historically and racially inaccurate outputs, Google has published a blog post explaining what went wrong.
Google’s senior vice president for Knowledge and Information, Prabhakar Raghavan, explained that Google tuned Gemini to generate images showing a range of people with different ethnicities and characteristics, but failed to account for prompts where a user asks to see a specific group or person.
Examples of the errors include Gemini generating racially diverse people when asked for images of Nazi-era German soldiers, and inserting non-white people into requests for historical figures such as the Founding Fathers of the U.S.
Wow, very educational pic.twitter.com/5mkzuEVtzn
— Michael Tracey (@mtracey) February 21, 2024
Google Pauses Gemini’s Image Generator After It Was Accused of Being Racist Against White People https://t.co/cFSsOmMrQu pic.twitter.com/IlvwmLutt8
— Gizmodo (@Gizmodo) February 22, 2024
In explaining what went wrong, Raghavan said, “First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range.” He added, “And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.”
Raghavan reiterated that Google never intended for Gemini to discriminate against any ethnicity or to produce historically false images, which is why it paused the feature. Google is currently working on a fix, though Raghavan stopped short of promising that Gemini will become error-free. “I can’t promise that Gemini won’t occasionally generate embarrassing, inaccurate or offensive results — but I can promise that we will continue to take action whenever we identify an issue,” he said.
Read the full blog post here.
Source: Google