UK prime minister Rishi Sunak said the AI Seoul Summit marks a ‘world first’ in securing agreement from companies around the world, including from the US, China and the UAE.

Anthropic, Google, Meta, Microsoft and OpenAI are among 16 leading tech companies that have agreed to an expanded set of safety commitments relating to the development of AI.

In an agreement today (21 May) at the AI Seoul Summit in South Korea, the companies – including ones from China and the United Arab Emirates (UAE) – have signed a set of Frontier AI Safety Commitments that builds on the Bletchley Declaration that came out of a similar UK summit last year.

According to the agreement, AI tech companies will publish safety frameworks for measuring risks associated with their frontier AI models – those that are advanced enough to pose a danger to humanity – such as examining the risk of misuse by bad actors. These frameworks must outline when severe risks, unless adequately mitigated, would be “deemed intolerable” and what companies will do to ensure thresholds are not surpassed.

There is also a provision for the most extreme circumstances, in which companies agree to “not develop or deploy a model or system at all” if mitigations cannot keep risks below the thresholds.

UK prime minister Rishi Sunak, who led the charge on setting global AI standards last year, called the AI Seoul Summit agreement a “world first” in which leading AI companies from “so many different parts of the globe” all agreed to the same commitments.

“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” Sunak said. “It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”

Among the 16 companies are Zhipu.ai of China and the Technology Innovation Institute based in the UAE. Others include Amazon, Cohere, G42, IBM, Inflection AI, Mistral AI, Samsung Electronics and Elon Musk’s xAI.

These companies are now expected to get input from “trusted actors” including their home governments before releasing their defined thresholds for safe deployment ahead of the AI Action Summit in France early next year.

“The Frontier AI Safety Commitments represent an important step toward promoting broader implementation of safety practices for advanced AI systems,” said Anna Makanju, vice-president of global affairs at OpenAI, which recently drew public criticism from actor Scarlett Johansson.

“The field of AI safety is quickly evolving, and we are particularly glad to endorse the commitments’ emphasis on refining approaches alongside the science. We remain committed to collaborating with other research labs, companies and governments to ensure AI is safe and benefits all of humanity.”
