November 21, 2023
Over the course of 2023, the peril and promise of artificial intelligence (AI) came to the forefront of public discourse across many policy sectors, including international security and nonproliferation. The release of ChatGPT vividly and publicly demonstrated the potential of AI as a tool. The UK government’s AI Safety Summit, a White House Executive Order, and dire warnings issued by industry leaders have highlighted potential perils. Many experts have compared the promise and perils of AI to those of the atom, with OpenAI even suggesting that something like the International Atomic Energy Agency (IAEA) is needed to organize global governance of AI.
As AI becomes integrated into domains related to weapons of mass destruction (WMD), AI-enabled tools will likely affect both the acquisition and the employment of WMD. They will also expand the nonproliferation toolbox. Despite these serious implications, the hype surrounding AI has yet to yield significant empirical research designed to answer fundamental questions about the positive and negative applications of these tools.
CNS is examining the nexus of AI and WMD nonproliferation from several perspectives, informed by a core group of staff with deep expertise in a variety of relevant issues. While the scope of CNS efforts will depend on the availability of external research funding, this article provides a snapshot of current CNS activities and identifies some issues for future research.
AI Safety, Alignment, and Global Governance
Over the past several years, experts have expressed concerns about the negative implications of AI for humanity, raising issues of safety, alignment, and the need for global regulation. Many of the near-term worst-case scenarios for AI, touching upon critical issues of quality assurance, disinformation, human supervision, and privacy, are more mundane than the hypothesized emergence of superintelligence, but no less important.
- Ian Stewart, Executive Director of the DC Office, examines the international regulatory debate concerning AI through the lens of the history and evolution of nonproliferation regimes and has recently concluded that the IAEA does not offer the best model for mitigating risks. He has also examined the implications of AI and other emerging technologies for strategic trade controls.
- Ferenc Dalnoki-Veress, Robert Shaw, and Miles Pomper authored a comprehensive report on the use of additive manufacturing to produce WMD components. The report highlighted the potential risk posed by generative AI in simplifying the construction of such components.
- Nomsa Ndongwe, Research Fellow, has spent several years working in Geneva under the Convention on Certain Conventional Weapons (CCW) to address the risks posed by lethal autonomous weapons and to shape guiding principles. She continues to explore the role of global governance in constraining the risks posed by AI.
- Hyuk Kim, Research Fellow, explores the technical efforts by North Korea and China to harness machine learning for potential military and dual-use applications, such as wargaming simulations and nuclear reactor operations.
- CNS Research Assistant Yanliang Pan has conducted research on China’s approach to AI governance.
- CNS experts are also in discussion with leading tech firms and regulators about what alignment in the nonproliferation context might look like. Specifically, this includes engaging in red-teaming and evaluating the potential of models to enhance chemical, biological, radiological, and nuclear (CBRN) capabilities.
AI, Strategic Competition, and Nuclear Deterrence
AI has major implications for strategic competition among the United States, China, and Russia and continues to shape the security environment underlying nuclear deterrence.
- Ian Stewart has led several projects examining Chinese and Russian use of machine learning in strategic and military programs, including from an export control perspective.
- Natasha Bajema, Senior Research Associate, has conducted extensive research on the impact of emerging technologies on nuclear decision-making and the mindset of nuclear decision-makers, and has examined the transparency implications of AI for nuclear deterrence. She is exploring ways to raise awareness among NATO nuclear decision-makers about these new risks and to strengthen the resilience of deterrence.
- Jeffrey Lewis, Director of the East Asia Nonproliferation Program, and Natasha Bajema are also exploring the impact of AI on command and control and deterrence issues.
- Hyuk Kim examines the intersection of machine learning and nuclear diplomacy with a focus on North Korea.
- CNS Research Assistant Yanliang Pan has conducted research comparing and contrasting Chinese, Russian, and US approaches to the development and use of generative AI.
AI and Biosecurity
In recent years, advanced biotechnologies such as gene sequencing, gene synthesis, and bioinformatics have transformed the life sciences into a branch of information technology through the generation and application of genomic data. This trend suggests powerful interactions between biotechnology and AI, with implications for both biosecurity and nonproliferation.
- Allison Berke, Director of the Chemical and Biological Nonproliferation Program, is conducting research assessing the risks of AI-enabled biodesign tools and their capacity to be used to design new toxins and pathogens.
- Together, Allison Berke and Natasha Bajema are examining whether large language models (LLMs) could assist proliferating states or nonstate actors in manufacturing bioweapons, thereby reducing the need for tacit knowledge. Separately, they are exploring whether specialist tools such as AlphaFold have the potential to assist in toxin and pathogen design, and under what circumstances.
AI as a Nonproliferation Tool
AI has the potential to become a powerful nonproliferation tool, extending open-source intelligence capabilities to a broad range of state and non-state actors. It can help analysts identify subtle changes in state behavior, detect shifts in nuclear postures, and contribute to treaty verification. CNS is working to examine AI’s potential contributions in several ways.
- Ian Stewart focuses on the question of how AI might contribute to various nonproliferation workflows. This work involves experimentation with the use of LLMs alongside other forms of machine learning for textual and open-source analysis (a sketch of this kind of workflow appears after this list).
- Jeffrey Lewis leads a team of nonproliferation analysts in monitoring and tracking the latest nuclear developments using innovative open-source intelligence approaches and leveraging access to commercial satellite imagery.
- Ferenc Dalnoki-Veress, CNS Scientist-in-Residence, examines the potential of agent-based models for nonproliferation. This work includes exploring whether agent-based models can contribute to arms control negotiations and verification activities (a toy example appears after this list).
- William Potter, CNS Founder and Director, is examining the potential applications of AI for pedagogical purposes in the realm of nuclear nonproliferation.
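To illustrate the kind of LLM-assisted textual analysis mentioned above, the sketch below shows how an analyst might triage open-source trade records for export-control relevance. It is a minimal example, not a CNS tool: the model choice, prompt, labels, and sample records are all hypothetical assumptions, and only the chat-completions interface of the openai Python package is real.

```python
# Minimal sketch of LLM-assisted screening of open-source text for
# export-control relevance. Prompt, labels, and records are invented
# for illustration; only the openai client interface is real.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical open-source snippets an analyst might want to triage.
records = [
    "Shipment of maraging steel bar stock routed through an unlisted broker.",
    "Purchase order for standard office furniture for a university building.",
]

def screen_record(text: str) -> str:
    """Ask the model to flag possible relevance to WMD-related trade controls."""
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You assist an export-control analyst. Label the record "
                    "RELEVANT or NOT_RELEVANT to WMD-related trade controls "
                    "and give one sentence of reasoning."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

for record in records:
    print(screen_record(record))
```

In practice, such outputs would feed into an analyst’s workflow for human review rather than serving as automated judgments.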
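Similarly, the toy agent-based model below conveys, in its simplest form, what the kind of simulation described in the Dalnoki-Veress bullet might look like. All parameters (compliance probabilities, inspection rates, the deterrent effect of detection) are invented assumptions for illustration and do not reflect CNS research results.

```python
# Toy agent-based model of treaty compliance and inspection.
# All numbers are illustrative assumptions, not empirical estimates.
import random

class StateAgent:
    """A state that decides each round whether to comply with a treaty."""

    def __init__(self, name: str, compliance_prob: float):
        self.name = name
        self.compliance_prob = compliance_prob
        self.violations_detected = 0

    def complies(self) -> bool:
        return random.random() < self.compliance_prob

def run_simulation(agents, rounds=100, inspection_prob=0.3):
    """Violations are detected only when an inspection happens that round."""
    for _ in range(rounds):
        for agent in agents:
            inspected = random.random() < inspection_prob
            if not agent.complies() and inspected:
                agent.violations_detected += 1
                # Simple deterrent effect: detection nudges future compliance up.
                agent.compliance_prob = min(1.0, agent.compliance_prob + 0.05)

agents = [StateAgent("State A", 0.9), StateAgent("State B", 0.6)]
run_simulation(agents)
for a in agents:
    print(f"{a.name}: {a.violations_detected} detected violations in 100 rounds")
```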
To support these efforts, CNS is undertaking a number of specific activities, including the creation of a dedicated working group and a section on its website to feature research and commentary on the nexus between AI and nonproliferation.
For more information about CNS work on AI, please contact Ian Stewart or Natasha Bajema.