How problematic is Google Gemini’s alleged racial and historical bias? 

According to senior Republican and Federal Communications Commission (FCC) Commissioner Brendan Carr, it’s “deeply concerning.”

“What this is doing is laying bare for the American people to see that the political ideology that permeates Silicon Valley is finding its way into algorithms,” Carr said on “The Bottom Line” Monday.

“And for years we were told there is no conservative bias, there is no political ideology, it’s all just neutral algorithms,” he continued. “And I think people are seeing for themselves that that is very far from the case.”

Last week, Google Gemini’s senior director of product management told Fox News Digital the company is working to improve the artificial intelligence (A.I.) tool “immediately,” after it produced historically inaccurate images and refused to show pictures of White people.


“We’re working to improve these kinds of depictions immediately,” Gemini Experiences Senior Director of Product Management Jack Krawczyk said. “Gemini’s A.I. image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Potential racial and historical bias in A.I. chatbots is “deeply concerning,” the FCC’s Brendan Carr said on “The Bottom Line.” (Getty Images)

“That’s why we need to put some sort of guardrails in place that’s going to promote speech, not censorship,” Carr responded, “but also make sure we have a diversity of viewpoints that are flourishing, and people don’t get a skewed, one-side-of-view history or current events.”

The human engineers who train the models and feed in the data and information “have envisioned the perspective and the bias that they wanted,” Carr argued, and have executed that within the code.

“Every time you’re putting in different queries, it’s making the exact same type of bias changes. And that’s why I think we have to step in at a federal government or state government [level] and put some guardrails in place to promote this diversity of views,” the FCC commissioner said.

More recently, Fox News Digital tested the A.I. chatbots Gemini, OpenAI’s ChatGPT, Microsoft’s Copilot and Meta AI to determine potential shortcomings in their ability to generate images and written responses.


Meta did not produce pictures of White people, while ChatGPT and Copilot did. Notably, Gemini and Meta wouldn’t discuss the achievements of any one race, while ChatGPT and Copilot would.

ChatGPT and Copilot continued to produce sensible answers across multiple test questions. Google has paused Gemini’s image-generation feature, while Meta, Microsoft and OpenAI did not return Fox News Digital’s requests for comment about the results.


FOX News’ Nikolas Lanum contributed to this report.
