ChatGPT creators building 'early warning system' for AI biological weapon
This illustration photograph taken on October 30, 2023, shows the logo of ChatGPT, a language model-based chatbot developed by OpenAI, on a smartphone in Mulhouse, eastern France (AFP via Getty Images)
OpenAI, the creator of ChatGPT, says it is building an early warning system that would alert people if artificial intelligence became able to help make a biological weapon.
But artificial intelligence appears to pose only a slight risk of helping people create such threats, the company has said.

That finding comes from OpenAI's latest tests, which examine how much large language models, such as those that power ChatGPT, could help someone create a biological threat.
But it cautioned that the finding was not conclusive, and that further work must be done to understand the true threat posed by artificial intelligence.
The work is part of OpenAI's broader "Preparedness Framework", which aims to evaluate AI-enabled safety risks. The company said it reported this early work in part to gather broader input.
As part of the study, it looked at ways humans might use AI to gather information on creating biological weapons, such as by tricking an AI model into giving up information about the ingredients of a biological weapon.
Participants were given tasks designed to model their ability to create a "biothreat", and were then evaluated on how successfully they were able to complete them.

The researchers found that people with access to such a system were slightly more successful, but the difference was not necessarily large enough to be meaningful.
The researchers nonetheless concluded that it is relatively easy to obtain information about biothreats even without using artificial intelligence, noting that much of that information is already available online.
The OpenAI researchers also noted that such evaluations are expensive to run, and that more work is to be done to better understand biorisks, such as how much information is actually needed.