The U.S. government has a “clear and urgent need” to act because swiftly developing artificial intelligence (AI) could lead to human extinction through weaponization and loss of control, according to a government-commissioned report.

The report, obtained by TIME and titled “An Action Plan to Increase the Safety and Security of Advanced AI,” states that “the rise of advanced AI and AGI has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.”

“Given the growing risk to national security posed by rapidly expanding AI capabilities from weaponization and loss of control — and particularly, the fact that the ongoing proliferation of these capabilities serves to amplify both risks — there is a clear and urgent need for the U.S. government to intervene,” read the report, issued by Gladstone AI Inc.

The report proposes a blueprint for intervention developed over 13 months, during which the researchers spoke with more than 200 people from the U.S. and Canadian governments, major cloud providers, AI safety organizations, and security and computing experts.


The report states that the rise of advanced AI could lead to the destabilization of global security similar to the introduction of nuclear weapons. (Reuters / Dado Ruvic / Illustration)

The plan begins with establishing interim advanced AI safeguards before formalizing them into law. The safeguards would then be internationalized.


The report recommends limiting the computing power used by AI and outlawing practices such as open-source licensing in order to keep the inner workings of powerful AI models secret. (iStock)

Recommended measures include a new AI agency that would cap the computing power available to AI systems, a requirement that AI companies obtain government permission to deploy new models above a certain threshold, and consideration of outlawing the publication of how powerful AI models work, such as under open-source licenses, TIME reported.


The report also recommended that the government tighten controls on the manufacture and export of AI chips.
