The Microsoft-backed start-up is developing a ‘blueprint’ for evaluating the risk that an LLM could aid someone in creating a biological threat.

OpenAI has said that GPT-4, the most advanced AI model developed by the ChatGPT maker, has a small potential to help users create biological weapons, a capability that could threaten human safety in the future.

In a blog post published yesterday (31 January), the San Francisco start-up said that GPT-4 provides “at most” a mild uplift in the ability of users to accurately create biological threats.

“While this uplift is not large enough to be conclusive, our finding is a starting point for continued research and community deliberation,” the company wrote.

“However, we are uncertain about the meaningfulness of the increases we observed. Going forward, it will be vital to develop a greater body of knowledge in which to contextualise and analyse results of this and future evaluations.”

OpenAI reached this conclusion while developing a “blueprint” for evaluating the risk that a large language model, or LLM, could aid someone in creating a biological threat.

It said that the results indicate a “clear and urgent” need for more work in this domain.

“Given the current pace of progress in frontier AI systems, it seems possible that future systems could provide sizable benefits to malicious actors,” the company went on.

“It is thus vital that we build an extensive set of high-quality evaluations for bio-risk (as well as other catastrophic risks), advance discussion on what constitutes ‘meaningful’ risk, and develop effective strategies for mitigating risk.”

The possibility of threat actors using AI to create biological weapons, among other risks the technology poses to humanity, has haunted many governments ever since the tech began to advance swiftly in 2022.

In October last year, US president Joe Biden signed an executive order to create AI safeguards.

“One thing is clear: To realise the promise of AI and avoid the risks, we need to govern this technology,” Biden said at the time. “There’s no other way around it, in my view. It must be governed.”

On this side of the pond, European lawmakers voted in favour of the landmark AI Act last June to rein in ‘high-risk’ AI activities and protect the rights of citizens. The rules will prohibit certain AI technologies outright and place others on a high-risk list, imposing obligations on their creators.


