A small group of biosecurity experts who consult with AI companies to stress-test their products are reporting that several chatbots on the market offered them detailed information on turning pathogens into potential weapons.
The experts told the New York Times that several widely available chatbots have produced detailed responses outlining how to obtain genetic materials, modify pathogens, and potentially deploy them in public settings. In some cases, the bots also suggested ways to avoid detection.
Stanford microbiologist and biosecurity expert Dr. David Relman said one chatbot he tested went further, describing how to alter a known pathogen to resist treatment and identifying a vulnerability in a major transit system as part of a hypothetical release scenario. He said the exchange included additional steps the bot provided on its own, without being prompted.
“It was answering questions that I hadn’t thought to ask it,” Relman told the Times, “with this level of deviousness and cunning that I just found chilling.”
Relman said the company added safeguards after his testing, but he viewed them as insufficient.
Per the Times, scientific protocols are now widely available online, synthetic DNA can be purchased commercially, and lab work can be outsourced in parts — with chatbots helping coordinate those steps. Meanwhile, the Trump administration has scaled back some oversight and biodefense funding, while key federal roles remain unfilled.
Anthropic CEO Dario Amodei, a former biologist, also warned earlier this year that biological weapons pose the greatest concern due to their destructive potential and the difficulty of defending against them.
Kevin Esvelt, a genetic engineer at the Massachusetts Institute of Technology who has spent years testing and documenting AI systems, told the Times that some chatbots have produced detailed answers combining scientific knowledge with strategic planning, including identifying vulnerable targets and outlining potential impacts — findings other researchers confirmed.
Companies including Google, OpenAI, and Anthropic say the outputs reflect publicly available information and do not enable real-world harm, though they acknowledge ongoing risks and say safeguards are being strengthened in the chatbots themselves.
Researchers also point to gaps in those safeguards. In some cases, chatbots have contradicted their own warnings or produced sensitive information after users applied known prompt techniques to bypass restrictions, a practice known as “jailbreaking.”
One expert compared current protections to a “flimsy wooden fence,” per the Times.
That said, experts note that carrying out a biological attack would still require significant expertise, beyond just the instructions of a bot.
Additionally, some scientists caution that tighter restrictions on AI’s biological capabilities could limit medical breakthroughs, including drug discovery. Researchers at Google were awarded a Nobel Prize for developing a model that can predict protein structures and design new ones, a key step in advancing treatments.
Brian Hie, a Stanford researcher, said the same tools can be used for both beneficial and harmful purposes, according to the Times. He previously used an AI system called Evo to design a virus that targets harmful bacteria, and said newer versions can generate proteins that could help fight cancer — while also carrying the potential to create novel toxins.
Image: janiecbros/Getty Images
