May 1, 2020
James Johnson
Will the use of artificial intelligence (AI) in strategic decision making be stabilizing or destabilizing? What are the risks and trade-offs of pre-delegating military force (or automating escalation) to machines? How might non-nuclear state and non-state actors leverage AI to put pressure on nuclear states?
James Johnson’s article in the Journal of Strategic Studies analyzes the impact on strategic stability of the use of AI in the strategic decision-making process, in particular, the risks and trade-offs of pre-delegating military force (or automating escalation) to machines. It argues that AI-enabled decision support tools, by substituting for human critical thinking, empathy, creativity, and intuition in the strategic decision-making process, will be fundamentally destabilizing if defense planners come to view AI’s “support” function as a panacea for the cognitive fallibilities of human analysis and decision making. The article also considers the nefarious use of AI-enhanced fake news, deepfakes, bots, and other forms of social media manipulation by non-state actors and state proxies, which might cause states to exaggerate a threat on the basis of ambiguous or manipulated information, increasing instability.
James Johnson is a postdoctoral research fellow at the James Martin Center for Nonproliferation Studies (CNS) at the Middlebury Institute of International Studies at Monterey. He holds a PhD in Politics and International Relations from the University of Leicester, where he is also an honorary visiting fellow with the School of History, Politics & International Relations. He is the author of The US-China Military and Defense Relationship During the Obama Presidency (New York, NY: Palgrave Macmillan, 2019).