Warfare is now entering the era of autonomous weapons: systems that navigate and operate through artificial intelligence (AI) and machine learning (ML). Chinese strategists term this rapid advancement of AI- and ML-controlled weaponry the ‘intelligentization of war’. With major players acquiring the technology, the globe may face a shift in strategic stability.

Autonomous weapons require no human intervention; they select and engage targets through sensors and software. Through ML, a weapon builds its own model of the task to complete and is guided accordingly. Such attacks will be hard for humans to predict or control, making accountability difficult and eroding international law against the use of force. Currently, states are keen to acquire Lethal Autonomous Weapon Systems (LAWS).

AI-driven weaponry will swiftly cause a security dilemma and escalate into an arms race.

To illustrate, India is rapidly adopting AI-led technologies, which could erode strategic stability in South Asia. States locked in major conflicts generally adopt modern warfare technologies to upgrade their regular armed forces against potential rivals. As a result, both India and Pakistan are determined to expand their defensive capabilities beyond existing ones while increasing their reliance on emerging warfare technologies such as autonomous weapons.

Scholars and policymakers widely perceive that AI-driven weaponry will swiftly create a security dilemma and escalate into an arms race by shifting the tactical offense-defense balance. If this becomes the case between nuclear rivals such as India and Pakistan, the result could be strategic instability, driven by mutual uncertainty over the balance of power. A common justification for this pursuit of technological superiority is enhanced deterrence: if India acquires such technology, Pakistan too would follow to uphold the balance. This traps states in what Richard Danzig called a “Technology Roulette”.

AI-driven weapons carry certain risks due to their inherent complexity. For instance, they may be vulnerable to hacking and coding errors, which could cause unpredictable accidents. Such failures could lead to a chaotic situation in which states lose control of what they have created. This technological aspiration therefore demands greater attention to attributes that can mitigate the consequences of failure and enable the resilient recovery of such weapons, a matter that requires serious deliberation at the global level.

The idea of “arms control for AI” remains in its infancy, and international law remains very much in flux. International efforts regarding war and weapons rest on four rationales: ethics, legality, stability, and safety. Military AI raises concerns on all four grounds. On ethical and legal grounds alone, “killer robots” should be banned. Neil Davison, senior policy adviser at the International Committee of the Red Cross (ICRC), believes that “AWS are an immediate cause of humanitarian concern and demand an urgent, international political response”. The non-proliferation of lethal autonomous military weapons is thus the need of the hour.

With global powers and other countries alike trying to acquire such technology, the AI arms race will soon be at its peak. To reduce the tension, major powers must make difficult compromises and forgo AI militarization in order to achieve mutual security. So far, 30 countries have declared support for a treaty banning LAWS. Since 2018, United Nations Secretary-General António Guterres has repeatedly described LAWS as politically unacceptable and morally repugnant and has called for their prohibition under international law.

AWS is an immediate cause of humanitarian concern and demands an urgent, international political response.

Nations’ perceived benefits from acquiring LAWS may become a challenge on the path to AI arms control. Such armaments threaten global stability by increasing the probability of major conflicts between rivals. The international community should also be concerned about rogue states or terrorist groups acquiring this technology; if that happens, world peace would be reduced to ashes. As for international law and norms, they say very little about AI militarization, and in any case, norms and laws alone are not enough to mitigate the threat posed by such weapons.

In this regard, the ICRC recommends that “states should adopt new legally binding rules on autonomous weapons, that will help prevent serious risks of harm to civilians and address ethical concerns, while offering the benefit of legal certainty and stability”. It proposes three measures: first, unpredictable autonomous weapons should be prohibited; second, autonomous weapons designed and used to apply force directly against people should be prohibited; third, strict restrictions should govern the design and use of all other autonomous weapons to mitigate the risks mentioned above.

Unpredictable autonomous weapons should be prohibited.

In contrast, advocates of AI militarization argue that it can offer tools for peaceful resolution and global stability. They contend that AI can prevent war through data analysis and the early detection of signs of conflict, thereby reducing the chances of confrontation or attack, creating stability, and avoiding escalation. They further argue that AI’s precision targeting can minimize harm and reduce collateral damage. With a concerted effort, the argument goes, AI can help prevent war and limit its impact when it does occur, to the collective benefit of humanity. With this rationale prevailing among proponents of AI weapons, disarmament becomes a hard ordeal.

Disarming AI-led technologies would require significant effort, and states need to focus more on negotiations. In this globalized world, “Track II” diplomacy can play a major role in controlling the spread of AI militarization by shaping public opinion and generating subsequent pressure. However, in a realist’s world, a state’s primary focus is its survival and security. In the complex web of militarization and uncertain relations, states will want to enhance their power rather than reduce it.

Disarming the future would not be easy, as all the states are now competing for their survival on all fronts.

Given the current global scenario, with the Ukraine-Russia war, the US-China trade war, the Indo-Pak conflict, and US-Iran nuclear negotiations, it is hard for the world to agree on disarmament. All the world’s flashpoints are ‘flashing’ right now, and disarming the future will not be easy, as states are now competing for survival on all fronts. Nevertheless, there must be laws to prohibit the large-scale damage of autonomous weapons. Humanity must be preserved in warfare.

Disclaimer: The opinions expressed in this article are solely those of the author. They do not represent the views, beliefs, or policies of the Stratheia.

Author

  • Nomeen Kassi

    The author is a Research Assistant at the Balochistan Think Tank Network (BTTN), Quetta, and an MS-IR scholar with a keen interest in India-Pakistan relations.