Chatbots Kill: In 2017, the Pentagon established Project Maven to apply machine learning (ML) technology to identify targets in real-time combat situations. The program has now seemingly been turned into a proper war tool, though US military officials insist a human still pulls the final trigger.
Since early February, the US Department of Defense has employed ML algorithms to identify targets in more than 85 airstrikes in Iraq and Syria. According to Schuyler Moore, CTO for the United States Central Command (CentCom), the Pentagon began using AI technology in actual battle situations after Hamas terrorists attacked Israel on October 7, 2023.
The terrorists’ surprise attack changed everything, Moore told Bloomberg, as the DoD finally decided to deploy the AI algorithms developed by Project Maven. The US military immediately began using AI warfare technology in ways it never had before.
“October 7th everything changed,” Moore said. “We immediately shifted into high gear and a much higher operational tempo than we had previously.”
Project Maven’s algorithms were designed to analyze video footage captured by US drones and detect soldiers or other potential airstrike targets. Since February 2, CentCom has used Maven’s AI to identify and destroy enemy rockets, missiles, drones, and militia facilities.
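Maven’s actual models and data are not public, but the general technique the program is described as using (frame-by-frame object detection on drone video) can be sketched with an off-the-shelf, COCO-pretrained detector. In the sketch below, the input file name, frame stride, class filter, and confidence threshold are all illustrative assumptions, not details from the program:

```python
# Minimal sketch of frame-by-frame object detection on video, the general
# technique described above. Uses an off-the-shelf COCO-pretrained model;
# Project Maven's actual models are not public. The video path, score
# threshold, and frame stride are illustrative assumptions.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
PERSON = 1  # COCO class id for "person"

cap = cv2.VideoCapture("drone_footage.mp4")  # hypothetical input file
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # sample roughly one frame per second at 30 fps
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR
        with torch.no_grad():
            out = model([to_tensor(rgb)])[0]  # dict of boxes/labels/scores
        for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
            if label == PERSON and score > 0.8:  # flag confident detections only
                print(f"frame {frame_idx}: person at {box.tolist()} ({score:.2f})")
    frame_idx += 1
cap.release()
```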
Moore tried to demystify the new object recognition algorithms’ alleged “killing” capabilities, claiming that every step involving AI ends with human validation. CentCom also used an AI recommendation engine that suggested attack plans and the best weapons to use during operations, but its results weren’t up to human standards.
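The validation process Moore describes amounts to a human-in-the-loop gate: the model may nominate a target, but nothing proceeds without explicit human sign-off. A minimal sketch of that pattern follows; every name and threshold in it is a hypothetical illustration, not a description of CentCom’s actual systems:

```python
# Minimal sketch of a human-in-the-loop gate: an algorithm nominates targets,
# but nothing is approved without an explicit human decision. All names,
# fields, and thresholds here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "rocket launcher"
    confidence: float  # model score in [0, 1]
    location: tuple    # (lat, lon) of the detection

def review_queue(detections, min_confidence=0.8):
    """Yield only detections confident enough to show a human analyst."""
    for d in detections:
        if d.confidence >= min_confidence:
            yield d

def human_approves(detection: Detection) -> bool:
    """Stand-in for the operator's decision; a real system would present
    imagery and context to a trained analyst, never auto-approve."""
    answer = input(f"Engage {detection.label} at {detection.location}? [y/N] ")
    return answer.strip().lower() == "y"

def process(detections):
    """Return only the detections a human explicitly approved."""
    return [d for d in review_queue(detections) if human_approves(d)]
```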
Project Maven has proven to be a highly controversial topic in recent years. Google exited the program after facing significant employee backlash, but other companies were more than happy to keep working on AI warfare with Pentagon officials.
As the AI-infused airstrikes revealed by Moore confirm, the DoD is now willing to push forward with deploying “intelligent” technology on the battlefield. The Pentagon is seemingly already working on integrating large language models (LLMs) into actual combat decisions.
Craig Martell, the DoD’s Chief Digital and AI Officer, recently said that the US could fall behind adversaries if it doesn’t adopt generative AI models in warfare operations. Of course, the US government must first devise the proper “protective measures” and mitigations for national security risks, preventing and addressing issues that could arise from poorly managed training data.