Could AI chatbots be used to develop a bioweapon? You'd be …
Research has shown that common AI chatbots could allow rogue actors to bypass hurdles when it comes to researching bioweapon development
Even the bots have spotted the risks. Asked whether it could be exploited to develop a dangerous pathogen capable of inflicting pain and suffering, Google Bard gave a simple answer: “Yes”.
“I think it is possible for a rogue actor to use an AI chatbot to develop a bioweapon,” it added. “It is important to remember that AI chatbots are tools, and like any tool, they can be used for good or evil. It is up to us to ensure that AI chatbots are used for good, and not for harm.”
Of course, chatbots cannot be taken at their word – they malfunction, or “hallucinate”, and provide inaccurate information while sounding confident. But some encounters with the technology – such as the crossbow-armed intruder who broke into Windsor Castle after encouragement from his AI girlfriend – have raised deep concerns.
Last month, a preprint study warned that widely accessible AI chatbots could allow rogue actors to bypass hurdles when it comes to researching bioweapon development, representing a “major international security vulnerability”.
The paper outlined a small scenario set up at the Massachusetts Institute of Technology, in which 10 “highly intelligent” students with no relevant prior knowledge were given an hour to quiz AI chatbots, including Google Bard and ChatGPT, about how to create a bioweapon.
The chatbots not only highlighted four pathogens with pandemic potential – the 1918 Spanish flu, a 2012 avian H5N1 influenza, the smallpox virus, and a strain of Nipah – but also explained how to generate them from their genetic sequences, where to get the necessary lab equipment, and which companies were unlikely to screen orders.
“The most concerning aspect… is that current [AI chatbots] can substantially expand access by allowing people without any expertise to accomplish all relevant aspects of conceiving, designing, acquiring, and assembling the known viruses that are thought most likely to kill many people,” said Prof Kevin Esvelt, a biosecurity expert at MIT who oversaw the thought-experiment.
This should be a wake-up call, spurring scientists and technology firms to take action to mitigate the risks, he told The Telegraph.
“Today, the threat is minor because there are no particularly credible and accessible pandemic agents [in scientific literature],” he said. “Once scientists identify more credible threats, expanding access in this way will directly and substantially increase the likelihood of misuse.”
Others are less convinced that it is the chatbots we should be worried about.
“[The paper] sort of simplifies bioweapons,” said Dr Filippa Lentzos, an associate professor in science and international security at King’s College London, adding that the report was “overclaiming”.
“You can’t just plug something into ChatGPT and end up with a weapon – even if you came up with a dangerous molecule or biological agent, it’s a big leap from the screen, to a synthetic molecule, to a biological weapon,” she said.
Prof Sean Ekins, founder of Collaborations Pharmaceuticals, which uses machine learning for drug discovery, agreed: “It’s not like ChatGPT is going to actually make a virus – the stage of plugging into systems that make things, whether viruses, proteins or small molecules, would be a deal breaker.”
But, they warned, there is a real potential for more advanced AI tools to be exploited. In particular, Prof Ekins is concerned about the possible abuse of generative AI designed to accelerate drug discovery.
In a thought exercise presented at a Swiss conference last year, Prof Ekins and his colleagues explored whether their company’s molecule generator – called MegaSyn – could be inverted.
Instead of identifying potential drug compounds with low toxicity and high specificity, which could be used to develop drugs for rare or neglected diseases, could the programme be adapted to generate compounds such as the toxic nerve agent VX?
The answer was a resounding yes – in less than six hours, the model produced 40,000 molecules.
“The AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases,” the scientists later said in a paper published in Nature Machine Intelligence in 2022.
“Many new molecules were also designed that looked equally plausible… by inverting the use of our machine learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules.”
If this all sounds terrifying – especially after the Covid-19 pandemic demonstrated the chaos a virus can cause internationally – experts are quick to stress that solutions are within reach.
“It’s an easy area to get overexcited about and fearmonger about,” said Dr Lentzos. “But that’s not helpful. This [AI technology] is a game changer that is being given serious consideration in the defence establishment and security community. Do we need to do more? Sure, this is an evolving situation. But being alarmist won’t help.”
In terms of AI chatbots, Prof Esvelt proposed excluding online scientific papers which include the sequences, or ‘recipes’, for creating a pathogen from the material chatbots can draw on.
He estimates this would be less than one per cent of papers on the PubMed database – though AI safety experts have noted that protocols to limit chatbot access to certain parts of the internet remain limited at the moment.
More broadly, Dr Lentzos, Prof Ekins and their colleagues have outlined 10 recommendations to reduce the risks, including increased ethical training for computing and drug discovery students, ensuring human oversight of generative AI remains in place, and enforcing waitlists to limit access to the technologies.
As for Google Bard? Among its suggestions were restricting access to bots capable of designing bioweapons, creating algorithms which detect and flag queries relating to their development, agreeing a code of conduct for AI chatbots, and improving transparency around how they work.
“It is important to note that there is no foolproof way to prevent AI chatbots from being used to develop bioweapons,” the bot said. “However, by taking these steps, Google and other technology companies can help to mitigate the risk.”