June 9, 2023
Ian J. Stewart
The following is an excerpt from the Bulletin of the Atomic Scientists.
OpenAI, the company behind ChatGPT and an apparent advocate of strong regulation in the artificial intelligence space, recently suggested that an International Atomic Energy Agency-like model may be needed for AI regulation.
On the face of it, the IAEA might seem like a reasonable model for AI regulation. The IAEA's system of safeguards has developed over time and provides confidence that safeguarded materials are not diverted to weapons-related end uses and that safeguarded nuclear facilities are not used for weapons development. But the nuclear governance model is not, in fact, a good one for regulating artificial general intelligence.
While one might argue that both nuclear weapons and artificial intelligence have the potential to destroy the world, the modality and pathways of such a catastrophe are far less clear for AI than for nuclear technology. While many focus on the idea that an AI could somehow develop or take over military capabilities, including nuclear weapons (e.g., Skynet), the existence of credible paths through which AI could destroy the world has not yet been established. Work to identify such paths is underway and must continue.
Nonetheless, the evolution of nonproliferation and safeguards measures was driven by the imperative of addressing an urgent global threat. The lesson from the nuclear domain is that consensus around the credibility and definition of a global challenge is necessary before states will take collective action to address it.
Continue reading at the Bulletin of the Atomic Scientists.