Can we preserve the art of war?

Robot dogs join a US Air Force exercise, giving a glimpse of the potential battlefield of the future. (source: CNN)

In nine stories published during the 1940s, the inimitable science fiction author Isaac Asimov explored the ethical implications of technology by imagining a world increasingly inhabited by humanoid autonomous systems. The stories chart different threads of a single frame narrative in which a reporter interviews a ‘robopsychologist’, and each converges on the issue of ethical programming, with Asimov’s three laws of robotics at the center. In one story, set around 2019, a robot refuses a human order yet still does the ‘right’ thing. In another, spun around 2021, a robot saddled with a programming error finds itself in an infinite loop between withholding and yielding information.

Kenneth Payne’s recent book on the complex interplay of AI and military strategic thinking takes its title from a clever play on Asimov’s anthology I, Robot. A political psychologist, Payne has long studied the evolution of strategic thinking in the context of warfare. I, Warbot echoes Arthur C. Clarke’s classic maxim that any sufficiently advanced technology is indistinguishable from magic. AI is no exception, which is why warfare theorists and AI practitioners alike must try to map the brittle skills at which AI can end up performing worse than a toddler.

The problem has an interesting theoretical dimension, but it is not just theoretical. Autonomous weapons systems are a reality, and algorithmically driven disruptive technologies are extending the boundaries of control in subtle ways. We have always stated unambiguously what we expect of autonomous weapons; isn’t it time to reflect on the ways they might behave unexpectedly?

This takes us back to Asimov’s fictional world, where a robot must be designed to follow three laws. One, it may not injure a human being or, through inaction, allow a human being to come to harm. Two, it must obey human orders unless they conflict with the first law. Three, it must protect its own existence unless doing so conflicts with the first or second law.
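These laws are, in effect, a strict priority ordering over constraints. A purely illustrative sketch of that ordering in Python (the `Action` type and its predicted-consequence fields are my invention, not anything from Asimov or Payne) might encode the laws lexicographically:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action with (hypothetical) predicted consequences."""
    name: str
    harms_human: bool      # would injure a human, or allow injury by inaction
    obeys_order: bool      # follows the current human order
    preserves_self: bool   # keeps the robot intact

def permitted(action: Action) -> bool:
    # First Law dominates absolutely: nothing that harms a human is allowed.
    return not action.harms_human

def choose(candidates: list[Action]) -> Action:
    # Among permitted actions, prefer obedience (Second Law), then
    # self-preservation (Third Law): a lexicographic ordering of the laws.
    legal = [a for a in candidates if permitted(a)]
    # (A real system would need a policy for when nothing is permitted.)
    return max(legal, key=lambda a: (a.obeys_order, a.preserves_self))

options = [
    Action("comply", harms_human=True, obeys_order=True, preserves_self=True),
    Action("refuse", harms_human=False, obeys_order=False, preserves_self=True),
]
print(choose(options).name)  # -> "refuse": the First Law overrides the order
```

The demo mirrors the 2019 story above: an order that would harm a human is simply never on the table, so the robot disobeys and still does the ‘right’ thing.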

However, while these laws may animate a fictional web of stories, can they provide a rational framework for guiding actual war machines built upon layer after layer of arguably inscrutable autonomous computing? Violence, after all, is a distinguishing feature of war, and if future Warbots – the lethal robotic machines – are designed and programmed to kill accurately and relentlessly, how can they incorporate an essential constraint of inefficiency without creating irresolvable paradoxes?

To attempt an answer, Payne offers three laws of Warbots as an opening gambit. First, a warbot should kill only those its owner wants it to, and should exercise violence in a humanistic way. Second, it must understand its owner’s intentions and exercise creativity in realizing them. Third, it should protect the humans on its owner’s side at all costs, even sacrificing itself, though this protection must not come at the expense of the mission.

This gambit is nothing less than a semantic masterstroke. Among other things, it immediately exposes how the AI portrayed in film and art is human-like without being human. The media, too, cannot break free of science-fictional templates. These indulgences tell us more about ourselves than about robots; they foster unrealistic expectations of AI that are mere on-screen manipulations, falling well short of what autonomous computing can actually achieve. Launching from this critical opening gambit, the rest of the book charts that domain of actual possibilities.

Since Payne is primarily a political psychologist, a recurring thread in the book is that the minds of Warbots – their neural connectivity, so to speak – will be quite different from ours. As AI practitioners, we may point to how state-of-the-art reinforcement learning algorithms are diverging from classical neural networks. Military tacticians, on the other hand, may point to the psychological insights of strategic theorists. Carl von Clausewitz, for instance, argued that war is an intensely emotional business in which ‘passionate hatred’ motivates the belligerents, and that the commander is an idealized ‘genius’ who makes the right decisions with limited information. Theorists like Clausewitz conceded, with humility, that they were in the dark about the complexities of the human mind. Nevertheless, they could state one fact emphatically: the human brain does not work like a machine.

Thus any decision-making technology, transplanted into artificially intelligent warfare, will yield unexpected results. Historical blueprints for creating Warbots are nonexistent; it is all about working backward from what we want them to achieve. The question boils down to this: what kind of weapons do the armed forces require? More specifically, what drivers shape those requirements in the first place?

Reducing the first question to a functional context disregards the most important paradigm, which is cultural: societal attitudes to war, and how different strategic cultures rationalize violence as a means to an end. The second question relates to design, i.e. the engineering philosophy as well as the craft. Can we say that Warbots are clever machines? They are, of course, far ahead of humans in computing power, in optimized decision-making within extremely constrained environments, and in agility of convergence, but would they count as ‘clever’ and ‘intuitively informed’ in the way humans are? Isn’t it possible that autonomous problem-solving is being conflated here with intelligence?

Payne argues at length that cybersecurity is becoming increasingly entangled with AI. To mitigate risks, organizations like DARPA regularly launch grand challenges in which AI systems automatically find vulnerabilities in code. While these challenges stop there on ethical grounds, what prevents the next obvious tactical maneuver: turning defense into attack by hacking the hacker? The competitions also offer insights into new conundrums of attribution in cyber warfare. If we cannot possibly know who has attacked us, how can we launch a counter-offensive without inviting chaos?
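At its simplest, automated vulnerability discovery is fuzzing: generate inputs, run the target, and record what makes it crash. The following toy sketch illustrates only that framing (the `parse_packet` target and its planted bugs are invented for this example; real entrants in DARPA’s Cyber Grand Challenge layered on symbolic execution and automated patching):

```python
import random

def parse_packet(data: bytes) -> str:
    """Toy target with planted bugs a fuzzer can trip over."""
    if len(data) < 2:
        raise ValueError("packet too short")
    length = data[0]
    payload = data[1:1 + length]
    return payload.decode("ascii")  # blows up on non-ASCII payload bytes

def fuzz(target, trials: int = 10_000) -> list[bytes]:
    """Dumb random fuzzer: keep every input that makes the target throw."""
    crashes = []
    for _ in range(trials):
        data = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

print(f"found {len(fuzz(parse_packet))} crashing inputs")
```

Each crashing input is a lead on a vulnerability; the unresolved question in the text, attribution, begins where tools like this end.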

Last year, DARPA officially announced that it is evaluating potential military uses of jetpacks. (photo credits: autorevolution)

The situation becomes even more complex as efforts like DeepMind’s increasingly resemble Asimov’s fictional universe: the terrain where Warbots design other Warbots. New deep learning algorithms dig into the constraints of a particular environment and look for features that can serve as foundations for other reinforcement learning algorithms. This is the meta-learning frontier, where an autonomous agent tries to learn what other autonomous learners need to learn.
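A toy Reptile-style loop makes the ‘learning what other learners need to learn’ idea concrete (this is my own minimal illustration, not DeepMind’s method): an outer learner adjusts a shared starting point so that an inner learner can adapt to any new task in a handful of gradient steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each 'task' is fitting y = a*x for a different hidden slope a."""
    a = rng.uniform(0.5, 2.0)
    x = rng.uniform(-1, 1, size=20)
    return x, a * x

def adapt(w, x, y, lr=0.1, steps=5):
    """Inner learner: a few gradient steps on one task (squared error)."""
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)
        w -= lr * grad
    return w

# Outer (meta) learner: nudge the shared initialization toward whatever
# the inner learner found, averaged over many sampled tasks (Reptile).
w_init = 0.0
for _ in range(2000):
    x, y = sample_task()
    w_task = adapt(w_init, x, y)
    w_init += 0.01 * (w_task - w_init)

print(f"meta-learned init: {w_init:.2f}")  # settles near the task-average slope
```

One learner explicitly shapes the starting conditions of another: the same structural move, writ very small, as the Warbot-designing-Warbot scenario above.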

It is no surprise that the goal of DARPA’s AI Next program is to build autonomous computers that can reason, think in context, and function more as colleagues than as tools. The subtle distinction between exploratory creativity and transformational or collaborative creativity is hard to miss. Whether it is Deep Blue beating Garry Kasparov, AlphaGo beating Lee Sedol, or an AI engine winning multi-player no-limit poker, all are examples of exploratory creativity at its best. Transformational creativity, however, is the mark of true genius. Machines, in this sense, are excellent at ‘thinking’, but can they truly ‘create’? Only if they can ‘understand’. Payne’s book not only raises a key strategic concern; it is also timely, and likely to engage military professionals and practicing scientists alike.

As engineers, we are well familiar with the ways control systems fail and overshoot the bounds of stability. We also know the strategic analogs of unstable systems: Clausewitz’s ‘fog of war’ and ‘friction’, for instance, which lead to failure. Can we pursue research that combines both concerns, toward semi-autonomous, artificially intelligent agents that collaborate with humans and are bound by our specific moral constraints? Only time will tell, since the boundaries between fact and fiction are already blurred. The real challenge lies in preserving the art of war while augmenting the science of it.
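As a closing aside, the overshoot failure mode invoked above is easy to demonstrate numerically. A minimal sketch, assuming nothing more than a toy first-order plant under proportional control (the gains are chosen purely for illustration):

```python
def simulate(gain: float, steps: int = 10, setpoint: float = 1.0) -> list[float]:
    """Drive a toy plant toward the setpoint with proportional control."""
    state, history = 0.0, []
    for _ in range(steps):
        error = setpoint - state
        state += gain * error  # each correction is proportional to the error
        history.append(round(state, 3))
    return history

print(simulate(gain=0.5))  # converges smoothly toward 1.0
print(simulate(gain=2.5))  # overshoots harder each step: the loop is unstable
```

With a gain of 2.5, every correction overshoots the setpoint by more than the previous error, so the error grows instead of shrinking; runaway feedback of this kind is the numerical cousin of fog-of-war and friction.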

COMMENTS

  1. It sounds interesting to me, but the write-up also gave me a bit of an inferiority complex, prompting me to read more, since what I know isn’t enough.

    Yeah, I do agree with some of the ideas highlighted in the write-up, like how we need to grasp the world’s changing paradigms. AI is the most advanced paradigm shift in science, and given how rapidly the world changes, we need to keep pace with it. Outdated versions of humans and machines alike get rusty. As the saying goes: “If someone wants to keep safe from a Mullah’s fatwa, they should make themselves a bigger Mullah.”

    Now, artificial intelligence will certainly have a role in future military applications. It has many application areas where it will enhance productivity, reduce user workload, and operate more quickly than humans.

    Ongoing research will continue to improve its capability, explainability, and resilience. The military cannot ignore this technology. Even if we do not embrace it, certainly our opponents will, and we must be able to attack and defeat their AIs. However, we must resist the allure of this resurgent technology. Placing vulnerable AI systems in contested domains and making them responsible for critical decisions opens the opportunity for disastrous results. At this time, humans must remain responsible for key decisions.

    Given the high probability that our exposed AI systems will be attacked and the current lack of resilience in AI technology, the best areas to invest in military AI are those that operate in uncontested domains. Artificial-intelligence tools that are closely supervised by human experts or that have secure inputs and outputs can provide value to the military while alleviating concerns about vulnerabilities.

    Examples of such systems are medical-imaging diagnostic tools, maintenance-failure prediction applications, and fraud-detection programs. All of these can provide value to the military while limiting the risk from adversarial attacks, biased data, context misunderstanding, and more. These are not the super tools sponsored by the AI salesmen of the world but are the ones most likely to have success in the near term.

  2. A very intriguing article that kept me fully absorbed till the very end. It comes at an appropriate moment, when the US and EU are trying to regulate artificial intelligence. Taming disruptive technologies will never command global consensus while superpowers struggle for a competitive edge over one another. Looking forward to more such insightful articles on this website.

    The more artificial intelligence becomes an indispensable part of our lives, the more risks we face regarding our privacy. While some experts say AI should be regulated, others point at bigger problems behind its rise in popularity.

    Artificial intelligence (AI) has seeped into every aspect of modern life – from “intelligent” vacuum cleaners and self-driving cars to advanced methods for diagnosing diseases.

    The United States recently published a blueprint for an AI Bill of Rights, and Canada is also mulling similar legislation.

    Gry Hasselbalch, a Danish academic who advises the EU on the controversial technology, argued that the West was also in danger of creating “totalitarian infrastructures”.

    The above text is an extract from the following recently published article:

    https://www.trtworld.com/life/why-are-the-us-and-eu-trying-to-regulate-artificial-intelligence-63307
