Nuclear deterrence served as a guarantor of peace throughout the Cold War. Yet the environment in which deterrence operated then differs significantly from today's, both in the underlying technologies and in the doctrines and strategies that evolved around them. As we move further into the 21st century, the fourth industrial revolution, manifested most profoundly in the growing salience of Artificial Intelligence (AI), is reshaping the decision-making cycle in multiple ways. Two central strands of this AI-driven technological revolution are Machine Learning (ML) and automation. Machine learning is a programming technique in which software learns patterns from data, enabling the development of AI-guided applications; automation is the ability of a machine to perform tasks on its own without human input.
Adding AI to the nuclear decision-making cycle, to delivery systems, and even to the arms control domain may yield both stabilizing and destabilizing effects on strategic stability.
As orthodox deterrence theory holds, an Assured Second Strike Capability (ASSC) is essential to maintaining effective deterrence, and fielding SSBNs is considered the most reliable means of assured retaliation. Detecting a submarine operating in the vastness of the oceans is difficult, but advances in remote sensing could change that: AI-enabled surface and underwater detectors could overcome the existing hurdles in submarine detection. Extending this line of argument, a state that develops advanced remote sensing technologies and successfully fields such AI-enabled detectors acquires the capability to kill an adversary's submarines and launch a comprehensive first strike. Such a "splendid" first strike, aimed at decapitating the adversary, is synonymous with the failure of deterrence, which operates only when two adversaries sustain a condition of Mutual Assured Destruction (MAD).
Deterrence operates at three interconnected levels: strategic, conventional, and sub-conventional. Incorporating AI into nuclear or conventional systems at any of these levels will affect the deterrence equation. AI-driven systems operate at machine speed, and weapon systems moving at a speed beyond human comprehension would have unimaginable tactical and strategic outcomes. If two states field algorithm-driven automated self-defense systems, their crisis dynamics would depend entirely on the unpredictable behavior of algorithms locked in competition with each other. What if programmers embedded a function of anticipatory self-defense in those algorithms, opening the possibility of pre-emptive or decapitating strikes?
Escalating to de-escalate is a strategy often adopted in asymmetric conflicts, and control over the rungs of escalation gives one an advantage over the adversary. The perceived battlefield advantages of deploying AI-aided weapons in asymmetric conflicts may be numerous; on closer inspection, however, the disadvantages are equally evident. Human commanders are accustomed to comprehending a conflict at a pace far slower than that of an AI-enabled crisis. If commanding officers deploy AI-aided weapons to exploit the asymmetry and achieve tactical objectives by compressing the OODA (Observe, Orient, Decide, and Act) loop, the equilibrium will initially turn in their favor, but it will eventually backfire. Once AI-aided weapons begin engaging and killing hostile targets at machine speed, the overall pace of the conflict accelerates many times over, compromising one's own OODA loop. In such a situation the commander cannot develop a holistic understanding of the conflict and may take tactical measures with unintended strategic implications, ultimately losing the conflict.
Incorporating AI into nuclear command, control, and communications (NC3) carries significant, and sometimes hard-to-discern, implications for strategic stability. Fully or partially automating early warning systems may not strengthen deterrence.
Automation brings speed to the decision-making cycle, from identifying an incoming vector to releasing weapons to engage it. Autonomous systems rely on algorithms that can process complex data and patterns and produce responses tailored to a given situation. However, excessive reliance on such automated systems creates a false sense of security. Algorithms are ultimately designed by humans, and their computational prowess depends on data inputs shaped by human anticipation of evolving situations. In this sense, excessive reliance on fully automated Early Warning Systems (EWS) might temper a state's response and thus complicate the issue of nuclear signaling. In any of the contingencies discussed above, an EWS might mistake an incoming flying object for a missile and issue a warning on the basis of which a state might launch its nuclear weapons. It is therefore of utmost importance to retain a human say in this automation process, as the sketch below illustrates.
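To make the point concrete, the following minimal Python sketch shows one way a human veto could be wired into an automated early-warning pipeline. It is purely illustrative: the TrackAssessment structure, the classification labels, and the 0.95 escalation threshold are hypothetical assumptions, not a description of any real NC3 system.

```python
from dataclasses import dataclass

@dataclass
class TrackAssessment:
    track_id: str
    label: str          # e.g. "ballistic_missile", "aircraft", "debris" (hypothetical labels)
    confidence: float   # model confidence in [0, 1]

# Hypothetical threshold: below it, a track is logged for analysts but never escalated.
ESCALATION_THRESHOLD = 0.95

def escalate_with_human_review(assessment: TrackAssessment,
                               human_confirms: bool) -> str:
    """Return the action taken for one early-warning track.

    The automated classifier only recommends; a human operator must
    independently confirm before any alert leaves this function.
    """
    if assessment.label != "ballistic_missile":
        return "monitor"
    if assessment.confidence < ESCALATION_THRESHOLD:
        return "flag_for_analyst_review"
    # Even a high-confidence machine classification is gated on human consent.
    return "escalate_alert" if human_confirms else "hold_and_reassess"

# Example: high machine confidence, but the human operator withholds confirmation.
print(escalate_with_human_review(
    TrackAssessment("T-042", "ballistic_missile", 0.97),
    human_confirms=False))   # -> hold_and_reassess
```

The design choice illustrated here is that the algorithm never launches anything: its highest-confidence output is still only a recommendation awaiting a human decision.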
The discussion so far has highlighted the destabilizing effects of AI on nuclear stability with reference to crisis stability and escalation control. It is now worth turning to the domains in which incorporating AI might strengthen the deterrence equation or crisis stability.
Wargaming is an integral facet of the nuclear decision-making cycle: a simulation exercise in which military planners learn to act in evolving situations while relying on the latest technologies and corresponding strategies. Its essence lies in contemplating future strategies by acting hypothetically in anticipated situations and crafting varying responses. However, the wargaming programs used for these exercises have their own limitations, as the modules rely excessively on historical experience, subjective biases, and institutional preferences. Incorporating AI-driven algorithms into wargaming programs could enable military planners to anticipate a far wider variety of situations, helping practitioners confront scenarios that are more realistic in outlook and comprise numerous variables.
If military planners are exposed to decision-making in such complex anticipated environments, their response cycle matures: they learn to act in evolving situations with the latest technologies and may develop new doctrines and strategies.
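As a rough illustration of how algorithmic scenario generation could move wargames beyond replaying historical cases, the Python sketch below samples synthetic crisis scenarios across several variables. Every variable name and value range here is a hypothetical assumption chosen for illustration, not drawn from any actual wargame module.

```python
import random

# Hypothetical scenario variables; real wargame modules would use far richer state.
POSTURES = ["assertive", "restrained", "ambiguous"]
SENSOR_RELIABILITY = (0.6, 0.99)   # assumed plausible range, not empirical data

def sample_scenario(rng: random.Random) -> dict:
    """Draw one synthetic crisis scenario instead of replaying a historical case."""
    return {
        "adversary_posture": rng.choice(POSTURES),
        "sensor_reliability": round(rng.uniform(*SENSOR_RELIABILITY), 2),
        "warning_time_minutes": rng.randint(5, 30),
        "comms_degraded": rng.random() < 0.2,
    }

rng = random.Random(7)
for scenario in (sample_scenario(rng) for _ in range(3)):
    print(scenario)
```

Feeding planners a stream of such varied, machine-generated situations is one simple way to reduce the pull of historical precedent and institutional preference in exercise design.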
The application of AI in the arms control domain leads to an interesting distinction: AI as an object of arms control, or AI as a facilitating agent for it. As a facilitating agent, AI-driven algorithms could provide a sound technical framework for strengthening the verification regimes of numerous arms control agreements. Two aspects of the technology are relevant here: object identification and pattern recognition.
Object identification could be a game changer when analysts must sift large data sets and trace potential threats to nuclear deterrents with precision. Applied to satellite imagery, it could significantly improve the quality of analysis, helping to address long-standing difficulties in detecting mobile missile sites and obtaining accurate, timely information about the latest missile developments.
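As a hedged illustration of the inference pattern involved, the Python sketch below runs a generic, COCO-pretrained object detector from torchvision over a single image tile. A real verification application would require a model trained on labeled overhead imagery of launchers and missile infrastructure; the input file name and the 0.8 confidence threshold are assumptions made for the example.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Illustrative only: detecting mobile launchers would need a custom-trained model;
# a generic COCO-pretrained Faster R-CNN is reused here just to show the pattern.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("satellite_tile.png").convert("RGB")   # hypothetical input tile
with torch.no_grad():
    detections = model([to_tensor(image)])[0]

# Keep only confident detections; the 0.8 threshold is an assumption.
for box, score, label in zip(detections["boxes"],
                             detections["scores"],
                             detections["labels"]):
    if score.item() > 0.8:
        print(f"class {int(label)} at {box.tolist()} (score {score.item():.2f})")
```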
Pattern recognition applications look for anomalies in large data sets. The large volumes of data received under various arms control agreements could, for instance, be subjected to pattern recognition analysis; likewise, data received under one bilateral agreement could be cross-checked against data sets from other agreements. In this manner, a holistic analysis could flag anomalies for closer inspection.
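A minimal sketch of this idea, assuming scikit-learn's IsolationForest as the anomaly detector and entirely synthetic declaration data, might look as follows; the column meanings, value ranges, and injected inconsistency are illustrative assumptions rather than real treaty data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic toy data: each row is one reporting period under an agreement, with
# columns such as declared launcher count and declared warhead count (assumed).
rng = np.random.default_rng(0)
declarations = rng.normal(loc=[100, 400], scale=[3, 10], size=(60, 2))
declarations[45] = [100, 520]   # an injected inconsistency for illustration

detector = IsolationForest(contamination=0.02, random_state=0)
flags = detector.fit_predict(declarations)   # -1 marks a suspected anomaly

suspect_periods = np.where(flags == -1)[0]
print("reporting periods flagged for closer inspection:", suspect_periods)
```

The output is not a verdict of non-compliance; it simply narrows down which reporting periods human inspectors should examine first.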
Artificial Intelligence carries significant potential benefits and inherent risks for nuclear stability. Mapping them out depends on regional dynamics, i.e., the state of technological development between two adversarial states and the power dynamics at large.
The author is pursuing an M.Phil in Defense & Strategic Studies at Quaid-i-Azam University, Islamabad. He holds certifications from the Stimson Centre, the EU Non-proliferation Consortium, the United Nations Office for Disarmament Affairs, and the International Atomic Energy Agency. He has contributed writings on military modernization and nuclear strategy to national and international platforms. He tweets @PakAllabove