Not too long ago, Microsoft released an AI chatbot on Twitter called Tay but quickly had to pull it because it started spewing racist garbage. Microsoft apologized for the fiasco, but that was in 2016, back when the transformer models powering today's generative AI were still far, far away. In 2021, the Allen Institute created Delphi AI and tasked it with giving ethical advice, but it soon started telling people that murder, genocide, and white supremacy are acceptable. Then came OpenAI, which claimed to have built the world's most advanced language model and gave us ChatGPT.

Repeated research has shown that it is not immune to racist tendencies either. The repercussions of AI's racial bias have already started seeping into people's lives. In 2020, police in Michigan arrested a Black man for theft on the strength of a botched facial recognition match. It was later revealed that the AI model behind the facial recognition system had been trained predominantly on white faces. Notably, experts had already warned about exactly these implications. From loan approvals and hiring to renting and public housing, AI systems have repeatedly demonstrated racial bias.

Early in 2023, I wrote about a ChatGPT blunder. When the ChatGPT mania was at its peak, someone prompted the AI model to tell jokes about Hindu deities. The model faltered at one particular name because of tonal variations in Indian languages, which, when transliterated into English, produce two different spellings of the same name. The joke ended up making prime-time news headlines, stirring nothing short of a culture war in the world's most populous country and branding AI tools like ChatGPT as a conspiracy against a religion followed by over a billion people. And that was hardly ChatGPT's first failure.
