Notes on “Army of None” by Paul Scharre (Book Review)

This summer, I spent some time catching up on my ‘reading’ list, specifically on its non-work-related part. You know, your brain deserves a vacation, too 😉

“Army of None” by Paul Scharre had been sitting in my Audible library for quite a long time, as I tend to postpone long readings in favor of shorter ones. I bought the book a year or so ago after reading Bill Gates’s review of it. When I finished it, it proved to me once more that the most balanced opinion on a subject usually lies at the intersection of knowledge areas: the author has a professional background in both the military and technology. Apart from that, the book revealed lots of biases that even tech nerds like me have towards the application of artificial intelligence in weapons.

So, let’s go through some of my key takeaways from the book.

The Sci-Fi perception

Terminator vs Roomba

It would be unwise to underestimate the impact of public media, especially science-fiction movies, on how the average person sees robots or computer systems with an AI label. Most opinions include some kind of drama: robots that have gone rogue and refuse to follow human commands, or dreadful killing machines that, for some unknown reason, want to eradicate humanity.

On the contrary, people prefer to stay unaware of the simple fact that our modern life already depends heavily on many small autonomous solutions: robot cleaners clean our households, autopilots fly passengers across the oceans, medical devices perform surgery no human doctor is capable of, financial software trades on the stock market, and the list goes on. We have gotten so used to the helpful things computers do for us that it becomes extremely hard to stay conscious of the augmentations we rely on.

For example, when driving through the city recently, I wondered who is actually in charge: me or the car navigation system? Am I a driver who merely pays attention to which route to take and follows the navigator’s instructions on where to turn, or is it the AI, which analyzes the traffic, decides on the best route, and reroutes when the traffic conditions change? Who is a supplement to whom? Just a decade ago, when driving to a new place, I had to check the route on a map, memorize some landmarks along the way, and sometimes simply stop the car and ask people for directions. Now, it’s as simple as opening Google Maps on your phone, typing an address, and following the app’s guidance. Woop, “You have arrived at your destination.”

While the Terminator remains fiction, we still worry more about highly intelligent human-like machines than about our obsessive dependence on the small rectangular devices we carry in our pockets.

Partial autonomy

“I’m not sure I understand.” © Siri

When it comes to discussing what is a good application of AI and what is not, it usually turns into a conversation about the level of autonomy and our understanding of the AI’s capabilities. Building an algorithm with some uncertainty and the possibility of coming up with an unprescribed solution to a computational problem is great from a scientific point of view. However, building a military system with the ability to ‘fire at will’ and even the slightest unpredictability is a completely different thing. Modern humans are not so different from their ancestors who lived centuries ago: we tend to fear or avoid anything we don’t understand or cannot predict.

Over the last fifty years, computer intelligence has proved very helpful in many areas, from space exploration to regulating the temperature in our houses. The common pattern behind its successful application is a focus on a very specific task to automate. Deep Blue plays chess and, with few exceptions, does it much better than any human; shopping algorithms analyze user interactions and increase sales by providing suggestions; autonomous transportation drastically reduces incident rates. On average, computer AI becomes a feasible option for replacing manual labor in relatively fixed and controlled environments that can be well-defined by governing rules.

The simpler the model the AI operates in, the more likely we are to create a successful algorithm that can outperform humans. The more unpredictable the conditions and the less defined the rules, the more likely humans are to succeed and machines to fail. Millennia of evolution have made us very good at everything related to survival. For instance, computer image recognition still falls far short of what a one-year-old baby can visually recognize with ease.

Abstract ‘thinking’ and moral judgment are also weak spots in applied computer intelligence, and both come down to the difficulty of defining abstractions or moral choices. As the author describes from his own experience, soldiers on the battlefield often face a choice between what is legal to do and what is moral. Where a machine would just follow its programmed rules, soldiers also rely on their moral sense and humanity. As M said in the movie “Spectre”: “A license to kill is also a license to not kill.”

Deus ex machina

Half-human, half-machine

In practice, the most successful applications of computer intelligence in the military fall into two major categories: processing large amounts of input data to prepare for operator decision-making, and executing a high-level objective with precision and speed no human can achieve.

In the former category, computer systems help humans make more reasonable decisions when an unfiltered stream of information from military units, radars, drones, satellites, etc. could easily overload our cognitive capacity. First, sophisticated algorithms identify, sort, rank, and logically group the incoming data. Then, they present the results in a meaningful form for the operator to consider when deciding what to do next. One example of such a system is a flight radar that constantly and completely autonomously scans an area and notifies the personnel on duty about matches for specific conditions, such as an unapproved flight.
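As a toy illustration of this first category, here is a minimal Python sketch of such a triage pipeline for radar contacts. Everything in it (the Contact record, the approved flag, the ranking rule) is my own hypothetical construction for this note, not a system described in the book:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """One hypothetical radar return, already fused from raw sensor data."""
    callsign: str
    altitude_m: float
    approved: bool  # does the contact match a filed, approved flight plan?

def triage(contacts: list[Contact]) -> list[Contact]:
    """Identify, filter, and rank contacts so the operator sees only what matters."""
    suspicious = [c for c in contacts if not c.approved]
    # Hypothetical priority rule: lowest-flying unapproved contacts first.
    return sorted(suspicious, key=lambda c: c.altitude_m)

def notify_operator(contacts: list[Contact]) -> None:
    """Present the result in a digestible form; the human decides what to do next."""
    for c in contacts:
        print(f"ALERT: unapproved flight {c.callsign} at {c.altitude_m:.0f} m")

feed = [
    Contact("SAS447", 11000, approved=True),
    Contact("UNKNOWN-1", 300, approved=False),
]
notify_operator(triage(feed))
```

The point of the sketch is the division of labor: the machine does the tireless scanning and ranking, while the judgment call stays with the human.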

The latter category also relies on data processing, but it helps on the execution side. Here, an operator makes the general decision to intercept or destroy a specific target, and the machine does the rest: locks onto the target, aims, fires, confirms the kill, and possibly repeats the sequence until the target is destroyed. What matters here is precision and execution speed. When a target moves so fast that no human can track it, autonomous weapons make a real difference.

The common trend in military computer applications is that a human should always stay in the loop. The operator should keep control over the on/off switch, either to initiate an engagement or to cancel it. In that scenario, the union of computer power for processing large amounts of data and the human capacity for adaptation makes a really powerful combo.
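To make the ‘human in the loop’ idea a bit more concrete, here is a purely illustrative sketch of an engagement sequence where the operator holds the on/off switch. The function names and the random battle-damage stand-in are mine, invented for this note; the book describes the concept, not this code:

```python
import random

def operator_approves(track_id: str) -> bool:
    """Human decision point: only the operator can initiate an engagement."""
    return input(f"Engage track {track_id}? [y/N] ").strip().lower() == "y"

def abort_requested() -> bool:
    """The kill switch; a real system would poll an operator console here."""
    return False  # placeholder for this sketch

def fire_and_assess(track_id: str) -> bool:
    """One lock-aim-fire cycle; returns True if the target is assessed destroyed."""
    print(f"Locking on {track_id}... firing... assessing.")
    return random.random() < 0.7  # stand-in for battle-damage assessment

def engage(track_id: str) -> None:
    destroyed = False
    while not destroyed:
        if abort_requested():  # the human stays in the loop throughout
            print("Engagement cancelled by operator.")
            return
        destroyed = fire_and_assess(track_id)
    print(f"Track {track_id} neutralized; control returns to the operator.")

if operator_approves("UNKNOWN-1"):
    engage("UNKNOWN-1")
```

Note that the machine owns the fast inner cycle, but both the entry point and the abort path belong to the human.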

You shall not pass

No robots allowed

Speaking of regulation of autonomous weapons, it is naive to believe that some law or rule can protect us from fully autonomous lethal weaponry. Countless lessons from the history of war teach us that human beings tend to use any new technology or weapon that provides even the slightest advantage over an enemy. The cases of chemical weapons, anti-personnel landmines, and blinding laser weapons in multiple conflicts are still making the news.

Other factors are more likely to implicitly limit the defensive or offensive application of AI. For example, a balance of power, as with nuclear weapons, can restrain a party from using autonomous weapons of mass destruction when it understands that a counterparty can strike back with similarly devastating effect. The high probability of friendly fire or collateral damage can also turn the military away from such systems.

Programming an AI according to Asimov’s Three Laws of Robotics is apparently a big challenge, as their definitions are too abstract and ambiguous. Moreover, how can we be sure that a truly self-conscious computer intelligence will understand the rules the same way humans do and will agree to follow them? Homo sapiens are born with only some basic instincts and no moral judgment at all; if babies are not brought up in society, they will not be so different from animals when they grow up. So, educating an AI according to human values is something we still need to explore.

“Army of None” can be an eye-opener for those who believe in conspiracies and Skynet-like intelligent machines. General AI that can compete with the cognitive skills of an average man or woman is beyond the reach of modern computers and poses no threat to humankind in the foreseeable future. On the contrary, a design flaw or a glitch in a relatively primitive automation algorithm can easily turn any machinery into a runaway gun. So, embedding a kill switch in any autonomous weaponry is a must.

I can definitely recommend this book to anyone as an introduction to modern warfare and the role of computers in military applications.