November 18, 2024
Dr. Natasha Bajema, CNS Senior Research Associate
Listen to the Interview
Download the .MP3
Get an insider’s perspective on the alarming intersection of AI and chemical weapons in our exclusive interview, in which Dr. Natasha Bajema talks with Dr. Ryan Stendall, a co-author of the groundbreaking article “How might large language models aid actors in reaching the competency threshold required to carry out a chemical attack?”
In the interview, Stendall, a chemist and AI safety researcher, sheds light on the potential of AI models like ChatGPT to lower the barriers to developing and using chemical weapons. He argues that chemical weapons are the “lowest-hanging fruit” for malicious actors compared to nuclear or biological weapons because of their relative simplicity and accessibility.
The interview explores how malicious actors could leverage both traditional chatbot interfaces and APIs to facilitate various stages of a chemical weapons attack. Stendall shares alarming examples of how he and his co-authors were able to extract sensitive information from AI models. While models like ChatGPT refused to provide information on well-known agents such as Sarin, they readily offered detailed and accurate instructions for synthesizing a similarly potent analog, Cyclosarin, in response to a simple prompt. This highlights the limitations of current guardrails and the need for proactive measures to address the evolving capabilities of AI models.
Looking ahead, Stendall emphasizes the importance of:
- Comprehensive pre-release evaluations of LLMs
- Regulation of chemical LLM agents
- Robust safety norms within the open-source community
- Training for scientists on AI and chemical security
This interview offers a sobering look at the emerging threats posed by AI in the realm of chemical weapons. By understanding these risks and taking proactive steps to mitigate them, we can harness the power of AI for good while safeguarding against its potential for harm.
Stay tuned to gain valuable insights from this discussion and understand how AI might impact the future of WMD proliferation. Don’t miss out on this opportunity to learn more about these pressing issues.
Be sure to read the article in The Nonproliferation Review. Visit the Taylor & Francis website today for open access to groundbreaking research that shapes our understanding of AI and global security. Also note that we have announced a call for articles that advance our understanding of the nexus between AI and WMD. Please submit your manuscript to The Nonproliferation Review by January 15, 2025.