xAI claims its fledgling chatbot will have an edge by accessing real-time information on X, but the platform has been criticised and investigated for spreading misinformation.
Grok AI has attracted attention since its launch last month, as the chatbot aims to stand out in the crowded AI market by having a “rebellious” streak.
The AI model was launched by xAI, the start-up founded by Elon Musk earlier this year that aims to “accelerate human scientific discovery” with AI. Given such ambitious goals, it may seem surprising that one of Grok AI’s selling points is being humorous and answering “spicy” questions.
xAI claims Grok will answer questions that are rejected by most other AI systems. The company also aims to integrate it with X – the platform formerly known as Twitter – to give it “real-time knowledge of the world”.
Musk has advertised the connection to X as a “massive advantage” over other AI models. The fledgling chatbot is currently available to some X Premium users and has already been subject to a mix of praise and criticism for its performance – with some being unhappy that its responses don’t align with their political views.
It is unclear when Grok AI will be more publicly available, but its roll-out appears to be gaining momentum as it was made available to X Premium users in India today (15 December).
The decision to connect Grok AI to X could give the chatbot a boost as it seeks to challenge more established models on the market, such as ChatGPT. But are there risks to relying on a social media platform for your chatbot’s real-time information?
Real-time misinformation
Joseph Thacker is a security researcher with SaaS security company AppOmni. He believes there are certain benefits Grok AI will gain from having access to up-to-date information on X.
“This can be pretty useful as emergency situations do seem to surface on X quite fast,” Thacker said. “Also, X has a good read on public sentiment.”
However, Thacker also said there are risks in relying on the “immense amount of data” that exists on X, as it contains a variety of harmful content such as false information.
“There’s a lot of toxicity, incorrect information, bias and racism in [X],” Thacker said. “So while it’s likely to sound very human, there are risks of that surfacing.”
The issue of misinformation existed on the platform long before Musk took it over last year. A report from 2018 claimed that posts containing false information were 70pc more likely to be retweeted – or reposted – than truthful posts.
However, the issue of misinformation spreading on X has resurfaced this year, as false narratives gained traction on the platform. False claims that the US Pentagon had been attacked were spread by ‘verified’ accounts on X.
Meanwhile, X is currently being investigated by the European Commission over the alleged spread of disinformation on the platform about the ongoing Israel-Hamas conflict.
“Grok is more likely to have the propensity for bias, as humans have bias and stereotype[s],” Thacker said. “That’s what shows up on social media time and time again, so I expect it may be more likely to bubble up in Grok versus other models.”
Grok isn’t the only chatbot that faces the risk of biased information. OpenAI warns on its website that ChatGPT is not free from biases and that it is “skewed towards Western views and performs best in English”.
Earlier this month, Musk replied “yes” to a user on X who asked if work would be done to minimise the “political bias” in Grok AI.
An inflexible chatbot?
Grok appears to have more of a personality than other mainstream chatbots, based on various screenshots shared online – many shared by Musk himself. The chatbot has a ‘regular’ mode and a ‘fun’ mode, which can be used to get more provocative responses, according to a recent article by Wired.
Musk has been pushing the “rebellious” personality of Grok as a key selling point. But Thacker feels this could limit the chatbot’s usefulness.
“One nice thing about AI models with less personality is they feel like they’re a blank slate, which can be given whatever personality or tone desired,” Thacker said.
He also raised concerns about the decision to have Grok answer “spicy” questions that other chatbots would not.
“It could make Grok more interesting and even useful in certain situations, but it may also open up ethical and legal issues,” Thacker said. “The fact that Grok is designed to answer ‘spicy’ questions might make it slightly easier for malicious actors to manipulate it into generating inappropriate or harmful content.
“For example, if Grok is asked to provide information on illegal activities or sensitive topics, it could potentially cause harm.”
Criminals have exploited chatbots in the past to obtain dangerous information. For example, hackers claimed to have used ChatGPT to develop malware before OpenAI created new safeguards to prevent this type of misuse.