Deterrence is a psychological tool used to dissuade an adversary from taking certain courses of action, or to compel it to refrain from specific military actions. The concept of deterrence is best explained by realists, who set out its types, components, and requirements. In traditional accounts, deterrence generally comes in three varieties: punishment, denial, and retaliation. Deterrence by denial convinces the enemy that any action will fail to achieve its objectives. The concept, formally introduced after the development of nuclear weapons, is evolving with the rise of technology. Traditional deterrence did not prevent devastating wars; nuclear deterrence, by contrast, became a product of mutual fear on both sides.

China’s evolving profile as a nuclear-weapons state is compounding India’s security challenges

With the advancement of military technology, India and Pakistan are pursuing new technologies and capabilities that dangerously undermine each other's defences below the nuclear threshold. China's evolving profile as a nuclear-weapons state is compounding India's security challenges. Yet control over the drivers of the India–Pakistan nuclear-deterrence and stability equation remains almost entirely in the hands of leaders in New Delhi and Islamabad. Because Pakistan's nuclear policy is entirely India-centric, it also fuels the arms race between the two. Furthermore, the trio's strong belief that they are still some way from achieving the kind of nuclear capabilities required to protect their national interests makes it likely that China, India, and Pakistan will continue to expand their nuclear arsenals, albeit at different rates, for many years to come.

In “The New Era of Counterforce: Technological Change and the Future of Nuclear Deterrence,” Keir Lieber and Daryl Press argue that dramatic improvements stemming from the computer revolution are rendering nuclear arsenals increasingly vulnerable to attack. Leaps in accuracy have largely negated the strategy of basing nuclear weapons in hardened shelters, and the revolution in remote sensing is eroding the strategy of concealing nuclear forces on land or at sea. Ensuring nuclear retaliation after an attack, the foundation of robust deterrence, is becoming more difficult.

If we look around the major five nuclear states, nuclear weapons became a part of their policies and national identity

To understand the concept of nuclear deterrence in the age of AI, constructivism offers the most useful theoretical lens for grasping its evolving nature. In South Asia, and particularly in the India–Pakistan case, narrative building significantly shapes state behavior and deterrent strategies. Constructivists emphasize ideational factors such as norms and identities: states acquire nuclear deterrence to protect a national identity they perceive to be under military threat. If we look at the five major nuclear states, nuclear weapons have become part of their policies and national identities. The ideational factor of the constructivist approach therefore plays a key role in states' nuclear policies and deterrent capabilities.

The use of AI in military technology is far more dangerous and is changing the overall dynamics of nuclear war. Former U.S. Secretary of Defense James Mattis expressed interest in AI throughout his tenure, even wondering whether AI might change the fundamental nature of war. Dr. James Johnson, a lecturer in Strategic Studies, describes AI as a dangerous nuclear asset: AI-enabled cyber weapons and algorithms could escalate nuclear risk or military conflict, diminishing a state's deterrent value. Johnson has also revisited Cold War-era episodes in which automated early-warning tools nearly triggered nuclear escalation. AI has advanced the 3Cs (command, control, and communication) of deterrent capabilities, early-warning radar systems, and long-range communication satellites.

Decision makers of the state will rely on AI for decision making and recommendations from advanced AI tools 

In the past, nuclear weapons were delivered by cruise or ballistic missiles and crewed bombers; now uncrewed bombers are entering service, which is extremely dangerous. An uncrewed drone bomber sent on patrol can serve as an escalation signal of readiness to launch a nuclear attack, yet such patrols depend on broad drone communication links and would therefore be unreliable. With the advent of AI, there is also a risk of artificial acceleration, in which an adversary's AI system makes decisions by calculating strategic maneuvers, creating a continuously escalating conflict. In the future, state decision makers will likely rely on advanced AI tools for decisions and recommendations. This will not only make communication less effective but also encourage offensive actions with catastrophic consequences.

Another escalatory technology that falls within the domain of AI is Intelligence, Surveillance, and Reconnaissance (ISR): the timely conveying of useful military information for decision-making purposes. China and Russia, for example, use mobile missile launchers precisely because they are difficult to track. A state that possesses perfect information on an enemy's key nuclear arsenals, or on the locations of its delivery systems, holds a considerable advantage. Such instability is prone to lead to the expansion of nuclear arsenals, increased escalation on other fronts, and a greater risk of nuclear conflict, undermining nuclear deterrence.

The fear of an accidental nuclear war will encompass a combination of human-machine interaction failure, organizational, and procedural factors

The nuclear-weapon states China, the United States, and Russia are all developing AI-integrated unmanned platforms for deploying nuclear weapons, including underwater vehicles, spaceplanes, and combat aerial vehicles. This combined military technology is known as lethal autonomous weapon systems (LAWS), which are extremely dangerous from the standpoint of nuclear escalation. Despite the NPT and bilateral strategic dialogues between Russia and the US and between China and the US, nuclear deterrence still faces the danger of escalation. There is a need for new arms control treaties and for strengthening existing ones such as the New Strategic Arms Reduction Treaty (New START) between Russia and the US. In this digital age, deterrence involves nonhuman agents entirely alien to classical deterrence. The fear is that an accidental nuclear war would arise from a combination of human-machine interaction failures and organizational and procedural factors. Moreover, decision makers underestimate the importance and frequency of random accidents in those interactions.

The problem lies in the lack of human agents to exercise caution over warning signals or intelligence assessments.

AI-enhanced systems, with their increased sophistication and compressed decision-making timelines, will make it harder to contain future mishaps and to de-escalate highly sensitive situations. Today, a cyberattack could infiltrate a nuclear weapon system, threaten its command-and-control structures, and gain access to its communication channels, while new cyber defence technology and AI-augmented cyber tools could exploit an adversary's plans to penetrate cyber defences, causing instability. AI systems can emit undesirable and unintended signals through their algorithms, complicating the delicate balance of escalation. In analyzing new strategic technologies, transformative AI systems, and augmentation, there is a dire need for human rationality, signaling, and perception. At a minimum, AI should not be used in the decision-making processes and command-and-control structures of nuclear weapons. The problem lies in the lack of human agents to exercise caution over warning signals or intelligence assessments. In the third nuclear age, the signaling of a nuclear crisis has become extremely complex and sensitive.

The most prominent AI-related risk concerns human decision making. Decision making involves intelligence gathering, communication, and analysis, and its main constraint is the time available to respond to an incoming nuclear attack: the longer warning takes, the greater the danger. AI-enabled technologies are therefore used to speed up detection and communication. The problem lies in machine-made errors in detection and early-warning systems, and in cyber-attacks on the information system. Sometimes decisions grounded in human rationality cannot be replicated by machines, as Stanislav Petrov demonstrated during the 1983 Soviet false-alarm incident. AI systems remain weak in interpretability and thus lag far behind human judgment. With AI assisting nuclear decision making, there is also a risk of artificial acceleration.

There is an utmost need for Confidence Building Measures (CBMs) for South Asian stability in the domain of AI

In the India-Pakistan case, the rise of AI within the command and control of nuclear weapons creates a fog of war in the region, undermining strategic stability, with both countries possessing dual capabilities. India's nuclear submarines are increasingly supported by advanced generative AI. India's and Pakistan's land- and air-based second-strike capabilities, by contrast, have a neutral effect on nuclear deterrence, and both countries maintain reliable, hardened command and control mechanisms kept separate from other networks. Both are now turning to AI products for national security, while limiting them to assisting decision making. There is an utmost need for Confidence Building Measures (CBMs) for South Asian stability in the domain of AI. India and Pakistan should establish AI risk-mitigation and reporting mechanisms to counter escalatory actions; regulatory measures and AI security for inter-state strategic communication are further steps both states should take.

However, some AI technologies are also reliable and can allow states to restabilize their relationships. Machine learning can improve early-warning detection capabilities, providing nuclear forces with a greater warning window and making it harder for an enemy to identify and target them. These technologies can also be used for detection through pattern recognition, and AI and machine learning can therefore support arms control verification measures. There is a strong connection between arms control and strategic stability, especially in the South Asian case. Such arms control arrangements are necessary for increasing stability in the region through AI techniques.

Disclaimer: The opinions expressed in this article are solely those of the author. They do not represent the views, beliefs, or policies of the Stratheia.
