‘AI and the Bomb: Nuclear Strategy and Risks in the Digital Age’ is divided into five chapters, bookended by separate introduction and conclusion chapters. In the introductory chapter, Dr. Johnson depicts a hypothetical future scenario in which AI has been incorporated into the nuclear command and control systems of the USA and China. Blind trust in AI to make fully autonomous decisions about nuclear exchange then results in a nuclear flash war between the two over Taiwan, leaving millions dead and inflicting heavy losses.

The book adopts a two-fold thesis. The first aspect examines how rapid advancements in AI are revolutionizing the management of military technology.

The second aspect looks at the risk of nuclear use, specifically how AI in nuclear systems might exacerbate the existing tensions between historical adversaries and increase the risk of inadvertent or accidental nuclear use.

For this purpose, the chapter first examines narrow military AI and the implications of technologies such as speech recognition, computer vision, and machine learning for military systems, including Intelligence, Surveillance and Reconnaissance (ISR), nuclear command and control (C2), nuclear and non-nuclear missile delivery systems, and conventional counterforce capabilities.

It then highlights the rapid development trajectory of AI technology and its impact on the nuclear enterprise, as well as certain shortcomings of current AI technology, including vulnerability to manipulation, inability to adapt to minor changes in the environment, weaknesses in automated image detection, and the like.

Chapter 1, titled ‘Strategic Stability: A Perfect Storm of Nuclear Risk?’, examines the concept of strategic stability, that is, the incentives or disincentives that encourage or prevent an actor from engaging in provocative, escalatory behavior. Linked concepts, namely first-strike stability, crisis stability, and arms-race stability, are also analyzed.

The author argues that strategic stability rests on the political, economic, and military dynamics of states, where technology plays a vital role as a balancer, an equalizer, and an agent of change.

Dr. Johnson then adds AI to the debate by highlighting how asymmetric AI military technologies could upset the balance of power between states, catalyze an arms race, and result in crisis instability and unintentional escalation. While measures such as arms control, confidence-building measures, and regional alliances might be implemented to clamp down on any escalation, they might prove ineffective.

The author does, however, reiterate that the shortening of military decision-making, increased speed of warfare, and co-mingling of military capabilities stem from the wider advancement of technology, not AI alone; he terms AI a manifestation or product of emerging technology rather than its origin or cause.

In the latter part of the chapter, the author takes a deep dive into certain destabilizing features of AI and how they could exacerbate escalation. These include the lack of human involvement and judgment in AI-driven systems, resulting in false alarms and accidents; biases in machine learning and the flawed judgments and assumptions that follow; and the susceptibility of AI-enabled technology to cyber-attacks. These challenges in military AI, coupled with nuclear multipolarity and the lack of mutual understanding of US-China risk tolerance and nuclear thresholds, can exacerbate risks.

The author believes it is important for military powers to enter into confidence-building measures (CBMs), adopt international frameworks for regulating military AI, and establish a governance architecture for the responsible use and development of AI and autonomy in the military domain.

Chapter 2, titled ‘Nuclear deterrence: new challenges for deterrence theory and practice’, looks at the threat that AI technology and AI-enhanced autonomous weapons could pose to nuclear deterrence. The author outlines the definitions, concepts, and assumptions of deterrence theory that emerged in the Cold War era, and then argues that these concepts must be revised for an era in which AI and other digital technologies have become pervasive.

Here, the author introduces the concept of the fifth wave of post-classical deterrence theorizing, in which experts examine how introducing non-human agents or removing human agents can impact strategic stability and deterrence.

Dr. Johnson reiterates that in the era of nuclear multipolarity, AI and autonomy would endanger the concept of nuclear deterrence and increase the probability of inadvertent nuclear use.

He further analyses how human perceptions, human-machine interactions, and AI-based systems are incorporated into force structures and doctrines, and how the ethical, moral, and value-laden norms associated with AI and deterrence will determine not only the future use of AI in the nuclear deterrence architecture but also whether it weakens or strengthens deterrence.

The chapter ends with a few recommendations to address these challenges, such as increasing transparency, countering the threat of non-state actors using AI-enabled tools to harm nuclear command, control, and communications (NC3) systems, transitioning to a deterrence-only military posture, and reducing the number of nuclear weapons.

Chapter 3, titled ‘Inadvertent escalation: A new model for nuclear risk’, analyses the concept of escalation and escalation theorizing, along with the escalation-ladder model introduced by Herman Kahn during the Cold War. Dr. Johnson argues that the psychological underpinnings of escalation theorizing will shape whether AI-enabled technology is perceived to increase or decrease the inadvertent risk of escalation.

Three scenarios of potential escalation in the AI-nuclear nexus are then provided: first, the incorporation of AI into conventional weapon systems and the associated risk of counterforce attacks, e.g. cyber-attacks; second, the potential of AI and other technologies in the digital ecosystem to spread disinformation, misperception, and cognitive bias; and lastly, a heightened fog of war (complexity, uncertainty, and complex decision-making) that could follow the introduction of AI-infused technology and cause escalation to spiral.

Recommendations to stem escalation include arms control agreements, verification regimes, bilateral and unilateral agreements, and dialogue.

Chapter 4, ‘AI security dilemma: insecurity, mistrust, and misperception under the nuclear shadow’, then looks at how structural and non-structural dynamics such as dual-use technology, psychological factors, and the offense-defense balance might change with the incorporation of AI, which could in turn intensify security-dilemma dynamics.

The author looks at the US-China relationship and how existing challenges arising from dual-use technology and offense-defense capabilities will be further exacerbated once AI enters the equation.

Lastly, the chapter looks at how the idiosyncrasies of political leaders and their regime types, i.e. autocratic or democratic, coupled with the introduction of AI-enabled technology, would shape the security dilemma faced by states.

Chapter 5, titled ‘Catalytic nuclear war: The new “Nth country problem” in the digital age?’, examines how, in the AI age, the Nth country problem could re-emerge: a catalytic war between two nuclear weapon-possessing countries initiated by a third-party state or non-state actor. Factors such as information overload, social media manipulation, fake news, disinformation, and increased automation of NC3 systems could result in a catalytic nuclear war.

According to the author, this could be avoided through measures such as strengthening the security of NC3, revamping existing NC3 protocols, carrying out intelligence-sharing activities, and agreeing to refrain from using cyber capabilities against nuclear command and control, amongst others.

The author suggests that instead of restricting AI use, AI could be used for good in the military domain, for example, by reducing boots on the ground, carrying out intelligence gathering, identifying shifts in news data coverage, and designing war games for military personnel and commanders.

To address AI technology’s political, ethical, and related challenges, a framework for controlling this technology must be established, bilateral and multilateral agreements must be enforced, and major stakeholders must be brought on board in discussions of AI and its challenges in the military domain.

The book is one of a kind in providing a comprehensive overview of the strategic implications of artificial intelligence in the military domain. It offers an in-depth analysis of the potential implications of AI for nuclear escalation and the ways in which these risks can be minimized. The book will be an intriguing read for anyone interested in this emerging technology and its influence on military applications.
