‘Some of these faked results have been obvious and silly,’ said Liz Reid, head of Google Search.  ‘Others have implied that we returned dangerous results for topics.’

Google has defended its new AI Overviews feature, released in the US last week, against claims of so-called hallucinations, saying that many of the widely shared examples were in fact faked screenshots.

Liz Reid, head of Google Search, claimed in a blogpost that user feedback shows people have “greater satisfaction” with their search results when they include AI Overviews – a feature unveiled at Google I/O last week and designed to give AI-boosted answers to Search queries, with generated summaries, tips and links to referenced sites.

The experimental feature is currently only available in the US, but Google has ambitious plans to bring it to more than 1bn people by the end of 2024.

Soon after its launch, social media was flooded with instances of often-humorous errors.

One user claimed Google’s AI Overviews answered that former US president Andrew Johnson earned 14 degrees, graduating multiple times between 1947 and 2012. Johnson died in 1875.

Another widely shared example had AI Overviews claiming that non-toxic glue can be added to pizza sauce to “give it more tackiness” – an answer that appears to be based on an 11-year-old post from a Reddit user.

One answer from AI Overviews claims that parrots are able to do “a variety of jobs”, including housekeeping, engineering and “prison inmate”. Another user shared a screenshot of AI Overviews claiming that adding more oil to a cooking-oil fire “can help put it out”.

Why did they happen?

Google blames the errors on the “very different” way AI Overviews works compared with chatbots and other LLM products.

“They’re not simply generating an output based on training data,” explained Reid. “While AI Overviews are powered by a customised language model, the model is integrated with our core web ranking systems and designed to carry out traditional ‘search’ tasks, like identifying relevant, high-quality results from our index.

“That’s why AI Overviews don’t just provide text output but include relevant links so people can explore further. Because accuracy is paramount in Search, AI Overviews are built to only show information that is backed up by top web results.”
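In broad strokes, the set-up Reid describes resembles what the industry calls retrieval-grounded (or retrieval-augmented) generation: the answer is assembled from passages a ranking system has already pulled from an index, with each claim linked back to its source. The Python sketch below is purely illustrative – the toy index and the `rank_results` and `generate_overview` functions are invented names for this example, not Google’s systems – but it shows the pattern of only surfacing text that a retrieved result backs up.

```python
# A toy illustration of the retrieval-grounded pattern Reid describes:
# the "answer" is assembled only from passages a ranking step pulled
# from an index, and every line links back to its source.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    passage: str

# Stand-in for a web index; a real system ranks billions of pages.
INDEX = [
    Result("https://example.com/parrots", "Parrots can mimic human speech"),
    Result("https://example.com/dough", "Let pizza dough rest before stretching"),
]

def rank_results(query: str, index: list[Result], k: int = 2) -> list[Result]:
    """Crude relevance score: how many query words appear in the passage."""
    words = set(query.lower().split())
    def score(r: Result) -> int:
        return len(words & set(r.passage.lower().split()))
    ranked = sorted(index, key=score, reverse=True)
    return [r for r in ranked if score(r) > 0][:k]

def generate_overview(query: str) -> str:
    """Compose an overview strictly from retrieved passages, with links."""
    results = rank_results(query, INDEX)
    if not results:
        return "No overview available."  # abstain rather than guess
    return "\n".join(f"- {r.passage} (source: {r.url})" for r in results)

print(generate_overview("can parrots mimic human speech"))
```

The key design point, as Reid tells it, is the abstention path: when retrieval comes back empty, the system shows nothing rather than generating a guess – though the viral examples suggest a bad retrieved source, such as a joke Reddit post, can still slip through.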

According to Reid, the errors cropping up are not examples of AI Overviews “hallucinating” in the same way that AI chatbots – such as OpenAI’s ChatGPT – sometimes do.

Instead, she argues that when AI Overviews gets it wrong, it is usually because the system misinterpreted a query, misread a “nuance of language” on the web, or simply had too little information on the web to draw from.

Reid also spoke out against “fake” screenshots of the feature that have been shared widely.

“Some of these faked results have been obvious and silly,” she said. “Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression. Those AI Overviews never appeared. So we’d encourage anyone encountering these screenshots to do a search themselves to check.”

Improvements on the way

Now, Google says it is working on improving the feature, which it has maintained is experimental and suggestive rather than authoritative in nature.

“From looking at examples from the past couple of weeks, we were able to determine patterns where we didn’t get it right, and we made more than a dozen technical improvements to our systems,” Reid said.

These changes include better detection of “nonsensical queries” that should not show an AI Overview, limits on the inclusion of satire and humorous content, limits on the use of user-generated content in responses that could offer misleading advice, and added triggering restrictions for queries where AI Overviews were not proving as helpful.

“For topics like news and health, we already have strong guardrails in place,” Reid added.

“For example, we aim to not show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections.”
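The “triggering restrictions” Reid mentions amount to deciding, before anything is generated, whether a query should get an overview at all. As a loose illustration only – the keyword lists and the `should_show_overview` function below are invented for this sketch, and a real system would use trained classifiers rather than keyword matching – such a gate might look like this:

```python
# A toy "should we show an overview?" gate, in the spirit of the
# triggering restrictions Reid describes. Categories and keyword
# lists are invented for illustration.

HARD_NEWS_TERMS = {"election", "breaking", "earthquake"}
SATIRE_HINTS = {"onion", "satire", "parody"}

def should_show_overview(query: str, source_domains: list[str]) -> bool:
    words = set(query.lower().split())
    # Suppress overviews for hard-news queries, where freshness matters.
    if words & HARD_NEWS_TERMS:
        return False
    # Suppress when top sources look satirical rather than factual.
    if any(hint in domain for domain in source_domains for hint in SATIRE_HINTS):
        return False
    # Suppress "nonsensical" queries: too short or purely numeric.
    if len(words) < 2 or all(w.isdigit() for w in words):
        return False
    return True

print(should_show_overview("how many rocks should i eat", ["theonion.com"]))  # False
print(should_show_overview("how to store olive oil", ["example.com"]))        # True
```

Gating of this kind trades coverage for safety: fewer queries get an AI answer at all, which matches Google’s stated aim of simply not triggering overviews where they were unhelpful.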

