
The book Army of None, written by Paul Scharre, a Pentagon defence expert and former U.S. Army Ranger, explores what it would mean to give machines authority over the ultimate decision of life or death. ‘Army of None’ takes us into the world of futuristic weapons technology: lethal autonomous weapon systems, or killer robots, driven by artificial intelligence and able to wage war without the need for human command.
The author asks and tries to answer the following questions: Given the rapid advancement in artificial intelligence (AI) technology, should robots be allowed to make life-or-death decisions? To what degree should humans be involved in the decision-making process? Should we, or could we, ban autonomous weapons? What happens when a Predator drone has as much autonomy as a fully self-driving Tesla? Or when a weapon that can hunt its own targets is hacked? Paul Scharre draws on deep research and firsthand experience to explore how these next-generation weapons are changing warfare.
Scharre’s far-ranging investigation examines the emergence of autonomous weapons and the movement to ban them. The book explains legal and ethical issues relating to the laws of armed combat in clear and simple terms and presents a range of expert opinions from leading thinkers in the field whom Scharre has interviewed. He spotlights artificial intelligence in military technology, spanning decades of innovation from German noise-seeking Wren torpedoes in World War II (antecedents of today’s homing missiles) to autonomous cyber weapons, submarine-hunting robot ships, and robot tank armies. Through interviews with defence experts, ethicists, psychologists, and activists, Scharre surveys the challenges that might face “centaur warfighters”, who will combine human and machine cognition, on future battlefields.
More than thirty countries already have defensive autonomous weapons that operate under human supervision. Around the globe, militaries are racing to build robotic weapons with increasing autonomy. The ethical questions within this book grow more pressing each day. To what extent should such technologies be advanced? And if responsible democracies ban them, would that stop rogue regimes from taking advantage? At the forefront of a game-changing debate, ‘Army of None’ engages military history, global policy, and cutting-edge science to argue that we must embrace technology where it can make war more precise and humane, but without surrendering human judgment. When the choice is life or death, there is no replacement for the decision-making of the human brain.
After defining autonomy and giving a brief background on its past and current use in warfighting, Scharre spends the majority of the book seeking answers to whether or not we should entrust life-or-death decisions to machines and, if so, to what degree. He does this by interviewing a diverse selection of industry experts and offering their views and perceived courses of action, allowing readers to form their own opinions on the subject. Those interviewed range from former US deputy secretary of defence Bob Work, to program managers at the Defense Advanced Research Projects Agency (DARPA), to private companies developing commercial applications of AI.
In addressing the moral dimensions, Scharre draws on his extensive experience as a U.S. Army Ranger in Iraq and Afghanistan. In one of his most arresting passages, he describes an incident in which he and some fellow soldiers, while positioned atop a mountain ridge on the Afghan-Pakistani border, observed a girl, perhaps five or six years old, herding goats nearby. In the stillness of the mountain air, they could hear her talking on a radio, a clear indication she was scouting their position for a Taliban force hiding nearby. Under the rules of war, Scharre explains, the young girl was an enemy combatant putting his unit at risk, and so could have been shot. Yet he chose not to, acting out of an innate moral impulse. “My fellow soldiers and I knew killing her would be morally wrong. We didn’t even discuss it.” Could machines ever be trained to make this distinction? Scharre is highly doubtful. The book presents many situations like this, laying out the moral and ethical dilemmas they pose and asking what machines would do when faced with them.
Most of Scharre’s discussion concerns the potential use of lethal autonomous weapons systems on the conventional battlefield, with robot tanks and planes fighting alongside human-occupied combat systems. His principal concern in these settings is that the robots will behave like rogue soldiers, failing to distinguish between civilians and combatants in heavily congested urban battlegrounds or even firing on friendly forces, mistaking them for the enemy. Scharre is also aware of the danger that greater autonomy will further boost the speed of future engagements and reduce human oversight of the fighting, possibly increasing the risk of unintended escalation, including nuclear escalation.
The book’s climax is a thoughtful analysis of international arms control treaties throughout history, as part of an examination of steps that could be taken to restrain the spread of autonomous weapon systems. Scharre argues that arms control treaties work well when they apply to weapons that have dreadful effects, offer limited military utility, and are possessed by a relatively small number of states. Unfortunately, he concludes, an all-out global ban on autonomous weapons is unlikely to work because of their perceived military value and their development by a wide range of militarized nations. He highlights the difficulty of defining autonomous weapons in a way that distinguishes them from existing highly automated systems such as Brimstone and Aegis, which possessing states would be unlikely to want to surrender, and of ensuring that any international ban is not disregarded during wartime.
However, ‘Army of None’ proposes several alternatives to a ban treaty, including a ban on antipersonnel autonomous weapon systems, which might work because such systems would have low military utility but a high potential for causing harm; a non-legally binding code of conduct to help establish norms for the control of autonomous weapons; and the establishment of a general principle that human judgment must always be involved in war and that there must always be positive human involvement in lethal-force decisions. The book’s conclusion is a powerful call for restraint, and an appeal for states and society to urgently develop an understanding of which uses of autonomous systems are acceptable and which go too far. Even though artificial intelligence is doing wonders in so many areas, it is premature to hand over decision-making control to algorithms in situations where loss of life is involved.