In case you missed the hubbub on Xitter, many people, including Elon Musk, have been flipping out this week over historical inaccuracies produced by Google's Gemini chatbot image generator, which have included racially diverse images of "the Founding Fathers."

The Gemini chatbot, a rebranding of Google's Bard, just rolled out to the public last week. And almost immediately some users began challenging it to produce images of popes and the American Founding Fathers, only to have Gemini spit back images that included Native Americans, Black people, women, and others.

This prompted an outcry, largely from conservatives (and of course Elon Musk), about Google trying to be too "woke" with its AI. For Musk, it was also an opportunity to tout xAI's Grok chatbot, which he says won't be hamstrung by any PC funny business.

The New York Times reported that Gemini's image generator, when asked to show an image of "German soldiers," produced images of "people of color in German military uniforms from World War II," and that this had "amplified concerns that artificial intelligence could add to the internet’s already vast pools of misinformation as the technology struggles with issues around race."

On Wednesday, Google's comms department issued a statement saying, "We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."

On Thursday, Google issued another statement saying "we're going to pause the image generation of people and will re-release an improved version soon."

A New York Post columnist pointed out that Google's efforts to diversify image depictions extend into Google Search.

"If you typed in 'gay couples' and asked for an image search, you got lots of happy gay couples. Ask for 'straight couples' and you get images of, er, gay couples," writes Douglas Murray. "Ask for images of 'black couples' and you got lots of happy black couples. Ask for 'white couples' and you got black couples, or interracial couples. Many of them gay."

The Times notes that Gemini's image generator explicitly declined to produce images of "white couples," with the chatbot responding that it is "unable to generate images of people based on specific ethnicities and skin tones," and, "This is to avoid perpetuating harmful stereotypes and biases."

Google may be over-correcting here after earlier problems in which its image search tool was accused of racial bias. You may recall the controversy in 2015 when the Google Photos app labeled a photograph of two Black people as "gorillas."

As the Times reported a year ago, eight years after that controversy, Google still had not been able to solve the problem — so it simply shut off the app's ability to find images of primates altogether. Searches for gorillas, chimpanzees, and monkeys come up empty, but other animals like cats and kangaroos are fine. Apple's Photos app similarly does not recognize primates at all.

In related news, Gemini produced an image last week of San Francisco that had a bridge unlike any actual bridge in San Francisco, so there's that. And OpenAI's DALL-E produces images of the Founding Fathers with deformed hands.

In any event, Musk's company xAI is operating under the belief that chatbots need not be fettered by rules at all, so this should be interesting when people start playing around with Grok and posting those images on X. The chatbot, which Musk concedes is "far from perfect," is set for release in two weeks, he says.

Previously: The Reviews Are Rolling In For Google's New Chatbot, Gemini, But They’re Not Exactly Raves