In nine stories published in the 1940s, the inimitable science fiction author Isaac Asimov explored the ethical implications of technology by imagining a world increasingly inhabited by humanoid autonomous systems. The stories chart different threads of a single narrative in which a reporter interviews a ‘robopsychologist’, and all of them converge on the problem of ethical programming, with Asimov’s laws of robotics at the center. In one story, set around 2019, a robot refuses to follow a human order yet still does the ‘right’ thing. In another, set in 2021, a robot saddled with a programming error finds itself caught in an endless loop between withholding and yielding information.
Kenneth Payne’s recent book on the complex interplay of AI and military strategic thinking takes its title from a clever play on Asimov’s anthology I, Robot. Payne, a political psychologist, has long studied the evolution of strategic thinking in the context of warfare. I, Warbot echoes Arthur C. Clarke’s classic maxim that any sufficiently advanced technology is indistinguishable from magic. AI is no exception, which is precisely why warfare theorists and AI practitioners alike must map out the brittle skills where AI can end up performing worse than a toddler.
The problem has an interesting theoretical dimension, but it is not merely theoretical. Autonomous weapons systems are already a reality, and algorithmically driven disruptive technologies are stretching the boundaries of control in subtle ways. We have always stated unambiguously what we expect of autonomous weapons; isn’t it time to reflect on the ways they might behave unexpectedly?
This takes us back to Asimov’s fictional world, where a robot must be designed to follow three laws. One, it may not injure a human being or, through inaction, allow one to come to harm. Two, it must obey human orders unless they conflict with the first law. Three, it must protect its own existence unless doing so conflicts with the first or second law.
However, while these laws may serve a fictional web of stories, can they rationally guide actual war machines built upon layer after layer of arguably inscrutable autonomous computing? Violence, after all, is a distinguishing feature of war, and if future Warbots – the lethal robotic machines – are designed and programmed to kill accurately and relentlessly, how can they incorporate the essential constraint of deliberate inefficiency without creating irresolvable paradoxes?
To attempt an answer, Payne offers three laws of warbots as an opening gambit. First, a warbot should kill only those its owner wants it to, and it should exercise violence as humanely as possible. Second, it should understand its owner’s intentions and work creatively to achieve them. Third, it should protect the humans on its owner’s side, sacrificing itself to do so if necessary, though never at the expense of the mission.
This gambit is nothing short of a semantic masterstroke. Among other things, it immediately exposes how the AI portrayed in film and art is human-like without being human. The media, too, cannot break free of science-fictional templates. Such indulgences tell us more about ourselves than about robots: they project unrealistic expectations of AI that are mere on-screen manipulations, falling well short of what autonomous computing can actually do. Launching from this critical opening gambit, the rest of the book sets out to chart that domain of actual possibilities.
Since Payne is primarily a political psychologist, a recurring thread in the book is that the minds of Warbots – their neural connectivity, so to speak – will be quite different from ours. As AI practitioners, we may immediately point to how state-of-the-art reinforcement learning systems are diverging from classical neural networks. Military tacticians, on the other hand, may turn to the psychological insights of strategic theorists. Carl von Clausewitz, for instance, argued that war is an intensely emotional business in which ‘passionate hatred’ motivates the belligerents, while the commander is an idealized ‘genius’ who makes the right decisions with limited information. Theorists like Clausewitz conceded with humility that they were in the dark about the complexities of the human mind. Nevertheless, they could state one fact emphatically: the human brain does not work like a machine.
Any decision-making technology transplanted into artificially intelligent warfare will therefore yield unexpected results. There are no historical blueprints for creating Warbots; it is all about working backward from what we want them to achieve. The question boils down to this: what kind of weapons do the armed forces require? More specifically, what drivers shape these requirements in the first place?
Reducing the first question to a purely functional context disregards the most important paradigm, the cultural one: societal attitudes to war, and how different strategic cultures rationalize violence as a means to an end. The second question relates to design, that is, the engineering philosophy as well as the craft. Could we say that Warbots are clever machines? They are far ahead of humans in computing power, in optimized decision-making within extremely constrained environments, and in speed of convergence, but would they be considered as ‘clever’ and as ‘intuitively informed’ as humans? Isn’t it possible that autonomous problem-solving is being mistaken here for intelligence?
Payne argues at length that cyber security is becoming increasingly entangled with AI. To mitigate risks, organizations like DARPA regularly stage grand challenges in which AI systems automatically find vulnerabilities in code. These challenges stop there owing to ethical concerns, but what prevents the next obvious tactical maneuver, turning defense into attack by hacking the hacker? Such competitions offer insights into new conundrums of attribution in cyber warfare: if we cannot possibly know who has attacked us, how can we launch a counter-offensive without inviting chaos?
The situation becomes more complex still as efforts like DeepMind’s increasingly imitate Asimov’s fictional universe, the terrain where Warbots design other Warbots. New deep learning algorithms dive into particular environmental constraints and look for features that can serve as foundations for other reinforcement learning algorithms. This is the meta-learning frontier, where an autonomous agent tries to learn what other autonomous learners need to learn.
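To make that idea concrete, here is a minimal sketch of meta-learning in the spirit of first-order MAML (Finn et al., 2017), not anything from Payne’s book: a meta-learner searches for an initialization from which a fresh learner can adapt to any new task in a single gradient step. The tasks, the linear model, and names like `meta_train` are all illustrative assumptions.

```python
# Toy "learning to learn": tasks are 1-D regressions y = a*x whose slope a
# varies per task. The meta-learner optimizes a shared initialization w0 so
# that one inner gradient step adapts well to whichever task is drawn.
import random

def task_grad(w, a, xs):
    # d/dw of the mean squared error of model y = w*x against truth y = a*x.
    return sum(2 * (w * x - a * x) * x for x in xs) / len(xs)

def meta_train(steps=1000, inner_lr=0.05, outer_lr=0.01):
    w0 = 0.0                                   # shared initialization
    xs = [i / 10 for i in range(1, 11)]        # fixed inputs for every task
    for _ in range(steps):
        a = random.uniform(0.5, 2.0)           # sample a task (a slope)
        w_adapted = w0 - inner_lr * task_grad(w0, a, xs)   # inner adaptation
        # First-order MAML: update w0 with the post-adaptation gradient.
        w0 -= outer_lr * task_grad(w_adapted, a, xs)
    return w0

if __name__ == "__main__":
    print(f"meta-learned initialization: {meta_train():.3f}")  # ~1.25,
    # roughly the mean task slope: the best single starting point to adapt from
```

The outer loop never learns any one task; it learns where a learner should start, which is the "agent learning what other learners need to learn" structure the passage describes, stripped to its simplest instance.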
It is no surprise that the goal of DARPA’s AI Next program is to build autonomous computers that can reason in context and function more as colleagues than as tools. The subtle distinction between exploratory creativity and transformational or collaborative creativity is hard to miss. Whether it is AlphaZero beating the strongest chess engines, AlphaGo beating Lee Sedol, or an AI engine winning multi-player no-limit poker, all are examples of learning by exploratory creativity at its best. Transformational creativity, however, is true genius. Machines, in this sense, are excellent at ‘thinking’, but can they truly ‘create’? Only if they can ‘understand’. Payne’s book not only raises a key strategic concern; it is also timely, and equally likely to engage military professionals and practicing scientists.
As engineers, we are well familiar with the ways control systems fail and overshoot the bounds of stability. We are also aware of the strategic analogs of unstable systems: Clausewitz’s ‘fog of war’ and ‘friction’, for instance, leading to failure. Can we pursue research that combines both concerns, achieving semi-autonomous, artificially intelligent agents that collaborate with humans and are bound by our specific moral constraints? Only time will tell, since the boundaries between fact and fiction are already blurred. The real challenge lies in preserving the art of war while augmenting the science of it.
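As a toy illustration of the overshoot point above (my own sketch, not an example from the book), consider the simplest possible feedback loop: a proportional controller driving an integrator plant. Below a critical gain the error contracts each step; past it, every correction overshoots by more than the error it was correcting, and the loop diverges, which is the engineering shape of ‘friction’ run wild.

```python
# Discrete proportional control of an integrator plant x[k+1] = x[k] + u[k].
# Closed loop: x[k+1] = (1 - K) * x[k] + K * target, stable only for 0 < K < 2.

def run_loop(gain, target=1.0, steps=12):
    x, trace = 0.0, []
    for _ in range(steps):
        u = gain * (target - x)   # proportional correction toward the target
        x = x + u                 # integrator plant accumulates the input
        trace.append(round(x, 3))
    return trace

if __name__ == "__main__":
    print("K=0.5 (stable):  ", run_loop(0.5))   # settles smoothly onto 1.0
    print("K=2.5 (unstable):", run_loop(2.5))   # oscillates with growing swings
```

A one-parameter change turns a well-behaved system into a divergent one, which is precisely why semi-autonomous agents bound by explicit constraints, rather than fully unsupervised ones, remain the prudent research direction.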