The EU’s AI Act is set to enter into force next month after receiving a final rubber stamp from the European Council.
Ministers today endorsed a political deal on the landmark law, billed as the world’s first comprehensive rules on AI.
The law applies a risk-based approach to regulation. Stringent restrictions apply to “high-risk” systems, from cars to law enforcement tools. Applications designated “unacceptable” — such as social credit scoring — will be banned altogether.
Although the EU set these rules, they will apply to any company that provides services or products within the bloc. That’s caused alarm in Silicon Valley.
It’s also made the EU a global leader in AI governance. Belgium’s digitisation minister, Mathieu Michel, described the final approval as a “significant milestone” for the union.
“With the AI Act, Europe emphasises the importance of trust, transparency, and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” Michel said in a statement.
Yet not everyone shares his optimism.
Concerns continue to swirl around the AI Act
Digital rights campaigners warn that the rules won’t protect the public, while tech firms fear they’ll impede innovation.
The true impact will soon start to emerge. In the coming days, the act will be published in the Official Journal of the EU. Twenty days later, it will finally enter into force.
“This is a big day for the EU and for AI policy, but the work isn’t done here,” said Maximilian Gahntz, AI Policy Lead at Mozilla Foundation.
“Actually, this is where much of the work starts when it comes to spelling out what all of this will mean in practice and building up strong, capable enforcement regulators to enforce these rules.
“And we need the AI Office to closely involve civil society and the open-source community as it implements the new rules.”