If you have spent much time on the internet this week, and particularly on X (formerly Twitter), you’ve likely come across the amusing story of the “Willy Wonka Experience Glasgow,” a kind of immersive children’s theatrical experience held in a warehouse in the Scottish city that had been advertised with AI-generated images as being far more wondrous than its drab reality.

The story came to light when X user Chris Alsikkan posted about it on Monday, Feb. 26, remarking: “Apparently this was sold as a live Willy Wonka Experience but they used all AI images on the website to sell tickets and then people showed up and saw this and it got so bad people called the cops lmao.”

He reported that the police were called because of the event’s massive discrepancy with its ads, and subsequent reports noted that attendees demanded refunds of the £35-a-head tickets.

Numerous other users reacted with variations of mockery, humor and concern about the potential for AI imagery to bamboozle customers. The original poster called for AI ads to be regulated.

A deeper deception

Since then, however, obsessed X users have done more digging and discovered that the man behind the event, Billy Coull, also appears to be using AI to crank out low-quality self-published books on Amazon, and that the Wonka Experience Glasgow performers were given a script that appears to have been AI generated as well.

Is this all a cautionary tale of AI run amok, and of how the technology can be used to pollute not just the internet but real life with enticing yet ultimately low-quality productions?

Even AI successes are being undermined

It certainly isn’t a great look for the technology’s impact on the real world. And even some of the more promising generative AI news in recent days has been called into question.

The news of the Wonka Experience Glasgow AI fail came just days before Swedish e-commerce startup Klarna reported massive success with its AI customer service chatbot powered by OpenAI’s GPT models. The company chalked up impressive stats: 2.3 million conversations in a month (two-thirds of all customer service chats for the period), a nine-minute drop in errand resolution time (from 11 minutes down to 2) and the equivalent workload of 700 human employees, all with customer satisfaction scores Klarna says are on par with human agents (though it did not publish the figures).

Yet even setting those numbers aside, the quality of the experience offered by Klarna’s chatbot has been called into question. X user Gergely Orosz, a former Uber engineering manager, reported that in his usage it proved to be a fairly simplistic engine for regurgitating company policies and documentation before passing the user off to a human agent to resolve their issues.

Leading chatbots are inflaming and confusing users

Meanwhile, libertarian and conservative-leaning tech magnates such as X owner Elon Musk continue to rage on their profiles about problems with Google’s Gemini AI chatbot, which users have shown go beyond its historically inaccurate, racially confused imagery: the chatbot also equivocates on moral questions such as Hitler’s impact and whether or not users may take pride in their “white” or Caucasian heritage.

Getting less attention are screenshots purporting to show odd responses from Microsoft’s rival Copilot AI assistant, powered by OpenAI’s GPT-4 and a fine-tuned Microsoft model called Deucalion, which reminded some X users of Microsoft’s earlier ill-fated chatbot persona, Sydney.

AI partnerships under fire

All of this happened the same week 404 Media, an independent investigative tech outlet run by journalists, revealed documentation showing that WordPress and Tumblr owner Automattic is in late-stage discussions to license user data from said platforms to OpenAI and Midjourney for training their AI models.

That, of course, came after last week’s revelation that Google is paying Reddit $60 million annually for the right to scrape Reddit user-posted data to train its AI models. And this week, we learned from Adweek that Google is partnering with and paying local news outlets to test a new AI article-writing tool that reportedly aggregates information from other news outlets and government agency websites, even as Google Search faces mounting criticism from small publishers for surfacing AI ripoffs of their work.

And, lest we forget, users particularly in the European Union (EU) are incensed that French AI startup Mistral inked a deal with Microsoft, taking investment in exchange for offering its AI models through Microsoft’s Azure cloud service, an agreement that seems to fly in the face of its previously stated commitment to European independence.

Mistral also appears to be backing away from the open-source community, removing language from its website about supporting the movement while keeping its newest model, Mistral Large, closed, private and restricted.

So the Wonka Experience Glasgow is really just the tip of the iceberg when it comes to the issues facing the entire burgeoning generative AI industry this week. Increasingly, even proponents and users are becoming more skeptical and disenchanted with the tech.

Can the companies offering it continue to wow users with its utility and capabilities, or will people ultimately try AI tools and leave as disappointed as the poor attendees of the miserable world of Wonka Glasgow (whose Oompa Loompa performer didn’t appear to be too thrilled to be there, either)?

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
