In the crucible of 21st-century warfare, artificial intelligence (AI) has emerged as both a transformative force and an existential threat. From AI-controlled drones over the skies of Gaza to predictive logistics shaping the battlefield in Ukraine, military AI is no longer a hypothetical future; it is the present.

However, as countries rush to develop hyper-intelligent, independently acting systems, it is becoming apparent that we are creating weapons we cannot always control, systems we do not entirely comprehend, and a warfare environment that may be impossible to contain. This is not just a technological problem; it is a legal, ethical, and strategic emergency that demands an immediate worldwide response.

The use of AI in military operations is transforming war at an unparalleled rate. The Israel Defense Forces (IDF) exemplify this shift, leveraging AI for real-time threat detection and rapid response in asymmetric conflicts, as reported by defense analyst Anna Ahronheim in The Jerusalem Post (2024). Likewise, the United States Department of Defense has been spending billions of dollars on AI, with recent contracts to Anthropic, OpenAI, xAI, and Google to develop war-fighting and intelligence capabilities.

Notably, the $200 million contract awarded to xAI proceeded despite widespread public uproar over controversial outputs from its chatbot Grok, which raises the question of who checks whether AI vendors behave ethically. In Europe, the private sector is stepping in where policy lags. Spotify CEO Daniel Ek’s investment in Helsing, a German AI defense firm, underscores the growing influence of corporations in shaping warfare’s future (Financial Times, 2025).

Nonetheless, ethical and legal frameworks are not keeping up with this technological sprint. AI promises precision but often delivers opacity. Autonomous Weapon Systems (AWS) and AI Decision Support Systems (AI-DSS) introduce profound accountability gaps. As legal scholar Julia Williams notes in Foreign Affairs (2025), when an AI misidentifies a target, who bears responsibility: the programmer, the commander, or the algorithm itself? The International Committee of the Red Cross (ICRC), in its 2025 submission to the UN Secretary-General, warns that “black-box” systems, whose outcomes are neither predictable nor explainable, risk violating International Humanitarian Law (IHL) by functioning as indiscriminate weapons.

One of the most insidious risks is so-called automation bias, in which operators under stress place too much trust in AI outputs. Bias in the data is equally dangerous: discriminatory targeting patterns embedded in the labeled data used to train AI-DSS can be encoded as bias and carried forward into decisions on detention and attack. The ICRC warns that, far from promoting human accountability, such systems can reduce commanders to mere rubber stamps for machine decisions.

The Center for Security and Emerging Technology (CSET) report AI for Military Decision-Making (2024) highlights additional risks: data poisoning, misclassification, and cascading algorithmic errors. These weaknesses are exacerbated by a geopolitical AI arms race in which the US and China spur rapid militarization, while smaller countries confront a stark binary choice: adopt AI or become strategically obsolete.

The absence of robust governance exacerbates these dangers. Unlike nuclear or chemical weapons, military AI has no dedicated international control regime. The ICRC’s call for a global prohibition on AWS that lack adequate human control is a critical starting point, but it has met resistance in a fragmented international system. As Russian capabilities expand and the EU’s Rearm Europe plan falters on AI strategy (Defense News, 2025), the gap between technological advancement and regulatory oversight widens.

The corrective action is to restore human authority over life-and-death decisions. This does not mean rejecting innovation; it is a matter of law and ethics. Military AI systems must be built around the human-in-the-loop principle, under which AI complements but never substitutes for human decision-making. Policymakers and commanders must also remain accountable, especially in dense operations such as urban warfare, where errors of judgment can prove disastrous.

To achieve this, three global shifts are essential:

  1. The UN and regional institutions must develop treaties that outlaw black-box AWS and ensure the implementation of IHL. Such instruments should address AI-specific risks, including automation bias and data-driven discrimination, and supplement existing regulations.
  2. As with nuclear non-proliferation treaties, military AI requires a global regime. That regime should govern use, export controls, human control, and data ethics in a way that ensures AI strengthens strategic stability rather than undermining it.
  3. Firms such as xAI, OpenAI, and Helsing are no longer mere vendors; they are geopolitical actors. Their technologies should undergo compulsory ethical review, bias testing, and transparency requirements, particularly where they are deployed in war zones.

AI is changing how wars are fought, but it must not define why they are fought or at what cost. The introduction of AI into military systems requires careful testing, design transparency, and scenario-specific legal analysis. Bias audits, effective training procedures, and fail-safe backstops must become the norm. Without such measures, the world will face algorithmic warfare that is faster, deadlier, and ever further removed from human control.

It is not the system that can be expected to conform to the law but the human being using it, the ICRC reiterates. Unless we rein in military AI, we stand to lose more than unjust wars; we may lose our capacity to stop them. The battlefield of the future must remain a human domain, not a computerized one in which ethics becomes a casualty of the algorithmic calculus of war. The fog of war is thickening; only deliberate, moral action will find the path through it.

Disclaimer: The opinions expressed in this article are solely those of the author. They do not represent the views, beliefs, or policies of the Stratheia.

Author

  • Amina Munir

    The author is a Research Associate at the Maritime Centre of Excellence (MCE), Pakistan Navy War College (PNWC), and holds MPhil degrees in South Asian Studies and Pakistan Studies.
