AI and Nonproliferation: CNS Experts Lead the Way

November 21, 2023

Over the course of 2023, the peril and promise of artificial intelligence (AI) came to the forefront of public discourse across many policy sectors, including international security and nonproliferation. The release of ChatGPT vividly and publicly demonstrated the potential of AI as a tool. The UK government’s AI summit, a White House Executive Order, and dire warnings issued by industry leaders highlighted potential perils. Many experts have compared the promise and perils of AI to those of the atom, with OpenAI even suggesting that a body akin to the International Atomic Energy Agency (IAEA) is needed to organize global governance of AI.

As AI becomes integrated into domains related to WMD, AI-enabled tools will likely affect the acquisition and employment of WMD. They will also expand the toolbox for nonproliferation. Despite the serious implications of AI, the hype has yet to yield significant empirical research designed to answer fundamental questions about the positive and negative applications of these tools.

CNS is examining the nexus of AI and WMD nonproliferation from several perspectives informed by a core group of staff with deep expertise on a variety of relevant issues. While the scope of CNS efforts will depend on the availability of external research funding, this article provides a snapshot of current CNS activities and identifies some issues for future research.

AI Safety, Alignment, and Global Governance

Over the past several years, experts have expressed concerns about the negative implications of AI for humanity, raising issues of safety, alignment, and the need for global regulation. Touching upon critical issues of quality assurance, disinformation, human supervision, and privacy, many of the near-term worst-case scenarios for AI are more mundane than the hypothesized emergence of superintelligence, but no less important.

AI, Strategic Competition, and Nuclear Deterrence

AI has major implications for strategic competition among the United States, China, and Russia and continues to shape the security environment underlying nuclear deterrence.

AI and Biosecurity

In recent years, advanced biotechnologies such as gene sequencing, gene synthesis and bioinformatics have transformed the life sciences into a branch of information technology through the generation and application of genomic data. This trend suggests powerful interactions between biotechnology and AI with implications for both biosecurity and nonproliferation.

  • Allison Berke, Director of the Chemical and Biological Non-proliferation Program, is conducting research assessing the risks of AI-enabled biodesign tools and their capacity to be used to design new toxins and pathogens.
  • Together, Allison Berke and Natasha Bajema are examining whether LLMs could assist proliferating states or nonstate actors in manufacturing bioweapons, thereby reducing the tacit knowledge required. Separately, they are exploring whether specialist tools such as AlphaFold have the potential to assist in toxin and pathogen design, and under what circumstances.

AI as a Nonproliferation Tool

AI has the potential to become a powerful nonproliferation tool, enabling open-source intelligence capabilities across a broad range of state and non-state actors. It allows analysts to identify subtle changes in state behavior, detect shifts in nuclear postures, and contribute to treaty verification. CNS is working to examine AI’s potential contribution in several different ways.

  • Ian Stewart focuses on the question of how AI might contribute to various nonproliferation workflows. This work involves experimentation with the use of LLMs alongside other forms of machine learning for textual and open-source analysis.
  • Jeffrey Lewis leads a team of nonproliferation analysts in monitoring and tracking the latest nuclear developments using innovative open-source intelligence approaches and leveraging access to commercial satellite imagery.
  • CNS scientist in residence, Ferenc Dalnoki-Veress, examines the potential of agent-based models for nonproliferation. This work includes exploring whether agent-based models can contribute to arms control negotiations and arms control verification activities.
  • William Potter, CNS Founder and Director, is examining the potential applications of AI for pedagogical purposes in the realm of nuclear nonproliferation.

To support these efforts, CNS is undertaking a number of specific activities. CNS will create a dedicated working group and a section on its website to feature research and commentary on the nexus between AI and nonproliferation, including export controls.

To stay in touch with CNS work on AI, please contact Ian Stewart or Natasha Bajema.
