The UK’s AI Safety Institute said its Inspect platform will help testers around the world to evaluate AI models.
The UK has released its own safety testing platform to help organisations around the world develop safe AI models.
The country’s AI Safety Institute said this platform – called Inspect – is a software library that lets testers assess the capabilities of AI models and produce scores on various criteria based on the results. Inspect has been released as an open-source platform for global testers, such as AI start-ups, researchers and governments.
The AI Safety Institute said Inspect can evaluate AI models in areas such as their core knowledge, reasoning ability and autonomous capabilities. The organisation said the platform has been released at a crucial time, as more powerful AI models are expected to emerge.
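Inspect itself is a Python library with its own documented interface; as a rough illustration only of the kind of workflow such an evaluation library supports – pairing prompts with expected answers and aggregating results into per-criterion scores – a minimal sketch might look like the following. All names here (`Sample`, `evaluate`, `toy_model`) are hypothetical and are not Inspect's real API.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    prompt: str
    target: str
    criterion: str  # e.g. "core_knowledge", "reasoning", "autonomy"

def evaluate(model, samples):
    """Run each sample through the model and score exact matches per criterion."""
    scores: dict = {}
    for s in samples:
        output = model(s.prompt)
        scores.setdefault(s.criterion, []).append(int(output.strip() == s.target))
    # Average per-sample results into one score per criterion
    return {c: sum(v) / len(v) for c, v in scores.items()}

# Toy stand-in for a real model API call
toy_model = lambda prompt: "4" if "2 + 2" in prompt else "unknown"

samples = [
    Sample("What is 2 + 2?", "4", "core_knowledge"),
    Sample("Name the capital of France.", "Paris", "core_knowledge"),
]
print(evaluate(toy_model, samples))  # {'core_knowledge': 0.5}
```

A real evaluation suite would swap the exact-match scorer for task-appropriate scoring (model-graded answers, tool-use checks and so on), which is the kind of extensibility open-sourcing the platform is intended to encourage.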
The UK government said the global release of this platform will allow for a consistent approach to AI safety evaluations. UK secretary of state Michelle Donelan said the platform also puts “UK ingenuity” into the global effort to make AI safe and “cements our position as the world leader in this space”.
“The reason I am so passionate about this, and why I have open sourced Inspect, is because of the extraordinary rewards we can reap if we grip the risks of AI,” Donelan said. “From our NHS to our transport network, safe AI will improve lives tangibly – which is what I came into politics for in the first place.”
The announcement comes a month after the UK and US agreed to a collaboration deal between their safety institutes, to develop common testing approaches for AI models. UK AI Safety Institute chair Ian Hogarth said the successful collaboration on AI safety testing means “having a shared, accessible approach to evaluations”.
“We hope to see the global AI community using Inspect to not only carry out their own model safety tests, but to help adapt and build upon the open-source platform so we can produce high-quality evaluations across the board,” Hogarth said.
Amanda Brock, the CEO of open technology company OpenUK, said the success of this new platform will be measured by the number of companies that have “already committed to their AI platforms being tested who actually start to go through this process”.
“With the UK’s slow position on regulating – which I agree with – this platform simply has to be successful for the UK to have a place in the future of AI,” Brock said. “All eyes will now be on South Korea and the next safety summit to see how this is received by the world.”