Despite what it described as extensive testing before launch, Google is admitting that its "AI Overview" system has been prone to spreading untruths, and some very weird untruths at that.
As we discussed last week, Google's "AI Overviews," which began appearing at the top of some searches two weeks ago, purport to provide quick, summarized answers to search queries. When or why an AI Overview might appear is not clear, but the algorithm seemed to pick and choose certain historical facts, how-to questions, and other random queries as candidates for an automatic overview.
In a couple of cases that SFist observed, the overview spat out misinformation — referring to "Dolley Madison's hotel" as the location of the first inaugural ball, for instance. Dolley Madison was the first lady, but the event took place at Long's Hotel in Washington, DC. And the more of these overviews that were generated, the more weirdness people began noticing in them.
Google tried to sidestep these issues, saying that such misinformation was rare.
But as CBS News reports, Google now admits that there are enough problems with the AI Overview module that the company is scaling it back for now.
Google's head of search, Liz Reid, posted to the company blog Thursday about the issues with AI Overview, saying, "In the last week, people on social media have shared some odd and erroneous overviews (along with a very large number of faked screenshots)... We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously."
Reid goes on to explain that AI Overviews are generally driven by web search results themselves, and thus "AI Overviews generally don't 'hallucinate' or make things up in the ways that other LLM (large language model) products might. When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available."
None of those reasons would really explain the "Dolley Madison's hotel" business. But perhaps they do explain why users got overviews that, for instance, suggested using glue to get cheese to stick to pizza. One would think there's a ton of good information on the web about passing kidney stones, but apparently users found AI Overview advice telling them to drink urine to pass stones more quickly.
Reid admits that "some odd, inaccurate or unhelpful AI Overviews certainly did show up," and while, she says, these "were generally for queries that people don’t commonly do," she concedes "it highlighted some specific areas that we needed to improve."
Reid now says the company is scaling back the overviews, in part by adding "triggering restrictions for queries where AI Overviews were not proving to be as helpful." The company says it will also avoid trying to summarize hard news topics "where freshness and factuality are important."
This is the latest gaffe by Google, a company that was once synonymous with infallibility, as it wades into the still uncharted waters of AI.
Earlier this year, the company faced widespread criticism for the image-generation component of its Gemini chatbot, which was producing historically inaccurate images for prompts like "the Founding Fathers," inserting people of color for the sake of diversity. That forced Google to temporarily suspend the image generator so it could be retooled.
Previously: Google's 'AI Overview' Gets Facts Wrong, Is Worse Than a Regular Search