This summer I spent some time catching up on my ‘reading’ list and, specifically, on the part of it that isn’t job-related. You know, your brain deserves a vacation too 😉
“Army of None” by Paul Scharre had been sitting in my Audible library for quite a while, as I tend to postpone long reads in favor of shorter ones. I bought the book a year or so ago after reading Bill Gates’s review of it. Finishing it proved to me once more that the most balanced opinion on a subject usually sits at the intersection of knowledge areas: the author has a professional background in both the military and technology. Apart from that, the book revealed lots of biases that even tech-nerd people like me hold towards the application of artificial intelligence in weapons.
So, let’s go through some of my key takeaways from the book.
The Sci-Fi perception
Terminator vs Roomba
It would be unwise to underestimate the impact of popular media, and especially science-fiction movies, on how the average person sees robots or computer systems carrying an AI label. Most opinions involve some kind of drama: robots that go rogue and refuse to follow human commands, or dreadful killing machines that, for some unknown reason, want to eradicate humanity.
On the contrary, people prefer to stay unaware of the simple fact that our modern life already depends heavily on many small autonomous solutions: robot cleaners tidy our households, autopilots fly passengers across the oceans, medical appliances perform surgery no human doctor is capable of, financial software trades on the stock market, and the list goes on. We have gotten so used to the helpful things computers do for us that it becomes extremely hard to stay conscious of the augmentations we rely on.
For example, driving through the city recently, I wondered who is actually in charge: me or the car navigation system? Me, the driver, who merely follows the navigator’s instructions on where to turn, or the AI, which analyzes the traffic, decides on the best route, and reroutes when conditions change? Who is a supplement to whom? Just a decade ago, when driving to a new place, I had to check the route on a map, memorize some landmarks along the way, or even simply stop the car and ask people for directions. Now it’s as simple as opening Google Maps on your phone, typing an address, and following the app’s guidance. Woop, “you have arrived at your destination.”
While the Terminator remains fiction, we still worry more about highly intelligent human-like machines than about our obsessive dependence on the small rectangular devices we carry in our pockets.
Partial autonomy
“I’m not sure I understand.” (c) Siri
When it comes to discussing what is a good application of AI and what is not, it usually turns into a conversation about the level of autonomy and our understanding of the AI’s capabilities. Building an algorithm that allows some uncertainty and may come up with an unprescribed solution to a computational problem is great from a scientific point of view. However, building a military system with the ability to ‘fire at will’ and even the slightest unpredictability is a completely different thing. Modern humans are not so different from their ancestors who lived centuries ago: we tend to fear or avoid anything we don’t understand or cannot predict.
Over the last fifty years, computer intelligence has proved very helpful in many areas, ranging from space exploration to regulating the temperature in our houses. The common pattern behind its successful application is a focus on one very specific task to automate. Deep Blue plays chess, and with few exceptions does it better than any human; shopping algorithms analyze user interactions and increase sales through suggestions; autonomous transportation drastically reduces incident rates. On average, computer AI becomes a feasible replacement for manual labor in relatively fixed and controlled environments that can be well defined by governing rules.
The simpler the model the AI has to operate in, the more likely we can create an algorithm that outperforms a human. The more unpredictable the conditions and the less defined the rules, the more likely a human is to succeed and a machine to fail. Millennia of evolution made us very good at everything related to our survival. Computer image recognition, for instance, is still far away from what a one-year-old baby can visually recognize with ease.
Abstract ‘thinking’ and moral judgment are also weak spots of applied computer intelligence; both come down to the difficulty of formally defining abstractions and moral choices. As the author describes from his own experience, soldiers on the battlefield often face a choice between what is legal and what is moral. Where a machine would just follow its programmed rules, soldiers also rely on their moral sense and humanity. As M said in the movie “Spectre”: “A license to kill is also a license to not kill.”
Deus ex machina
Half-human, half-machine
In practice, it turns out that the most successful applications of computer intelligence in the military fall into two major categories: processing large amounts of input data to prepare it for an operator’s decision, and executing a high-level objective with a precision and speed no human can achieve.
In the former category, computer systems help humans make more reasonable decisions when the unfiltered stream of information from military units, radars, drones, satellites, and so on could easily overload our cognitive capacity. Sophisticated algorithms first identify, sort, rank, and logically group the incoming data, then present the results in a meaningful form for the operator to consider when deciding what to do next. One example of such a system is a flight radar that constantly and completely autonomously scans an area and notifies the personnel on duty about matches for specific conditions, such as an unapproved flight.
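To make that filter-and-alert pattern concrete, here is a minimal sketch in Python; the `Contact` fields, the flight-plan condition, and the ordering by altitude are my own illustrative assumptions, not anything described in the book:

```python
from dataclasses import dataclass


@dataclass
class Contact:
    """A single radar return (fields are illustrative)."""
    callsign: str
    altitude_ft: int
    has_flight_plan: bool


def screen_contacts(contacts):
    """Autonomously reduce the raw feed to items worth a human's attention:
    keep only contacts without an approved flight plan, ordered by altitude
    so the operator sees the low-flying ones first."""
    suspicious = [c for c in contacts if not c.has_flight_plan]
    return sorted(suspicious, key=lambda c: c.altitude_ft)


if __name__ == "__main__":
    feed = [
        Contact("UAL42", 36000, True),
        Contact("UNKNOWN-1", 1200, False),
    ]
    for alert in screen_contacts(feed):
        print(f"ALERT: unapproved flight {alert.callsign} at {alert.altitude_ft} ft")
```

The machine never decides anything here; it only compresses a flood of data into a short list a human can actually reason about.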
The latter category is also based on data processing, but it helps on the execution side. Here an operator makes the general decision to intercept or destroy a specific target, and the machine does the rest: locks on the target, aims, fires, confirms the kill, and possibly repeats the sequence until the target is destroyed. What matters here is precision and execution speed; when a target moves too fast for any human to track, autonomous weapons make a real difference.
The common thread across computer applications in the military is that a human should always stay in the loop. The operator keeps control over the on/off switch: either initiating an engagement or cancelling it. In that scenario, the union of computing power for large-scale data processing and the human capacity for adaptation makes a really powerful combo.
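A toy sketch of that human-in-the-loop idea, again with entirely hypothetical names: the operator holds the authorization switch, while the machine runs the fast lock-aim-fire-confirm cycle and re-checks the switch before every shot:

```python
import threading
from dataclasses import dataclass


@dataclass
class Target:
    name: str
    destroyed: bool = False


class Engagement:
    """The human decides whether to engage; the machine executes the
    sequence with a speed and precision no operator can match."""

    def __init__(self):
        self._authorized = threading.Event()  # the operator's on/off switch

    def operator_initiate(self):
        self._authorized.set()

    def operator_cancel(self):
        self._authorized.clear()

    def run(self, target: Target):
        while not target.destroyed:
            # Re-check human authorization on every iteration,
            # so a cancel takes effect even mid-engagement.
            if not self._authorized.is_set():
                print("Engagement cancelled by the operator.")
                return
            target.destroyed = self._lock_aim_fire_confirm(target)

    def _lock_aim_fire_confirm(self, target: Target) -> bool:
        # Placeholder for the fast, precise part the machine handles.
        print(f"Engaging {target.name}...")
        return True  # pretend the kill is confirmed


if __name__ == "__main__":
    weapon = Engagement()
    weapon.operator_initiate()  # the human makes the general decision
    weapon.run(Target("fast-moving-drone"))
```

The point of the sketch is only the control structure: the authorization check sits inside the loop, not just before it, which is what keeps the human “in” the loop rather than merely ahead of it.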
You shall not pass
No robots allowed
Speaking of regulating autonomous weapons, it is naive to believe that some law or rule can protect us from fully autonomous lethal weaponry. Countless lessons from the history of war teach us that human beings tend to use any new technology or weapon that provides even the slightest advantage over an enemy. The uses of chemical weapons, anti-personnel landmines, and blinding laser weapons in multiple conflicts still make the news.
It is more likely that other factors will implicitly limit the defensive or offensive application of AI. For example, the balance of power, as with nuclear weapons, can restrain a party from using autonomous weapons of mass destruction once it understands that the other side can strike back with similarly devastating effect. A high probability of friendly fire or collateral damage can also turn the military away from such systems.
Apparently, programming an AI according to the three laws of robotics is a big challenge, as their definitions are too abstract and ambiguous. Moreover, how can we be sure that a truly self-aware computer intelligence would understand the rules the same way humans do and agree to follow them? Homo sapiens are born with only basic instincts and no moral judgment at all; babies not brought up in society grow up not so different from animals. So, educating an AI according to human values is something we still need to explore.
“Army of None” can be an eye-opener for those who believe in conspiracy theories and Skynet-like intelligent machines. General AI that could compete with the cognitive skills of an average man or woman is out of reach for modern computers and poses no threat to humankind in the foreseeable future. On the contrary, a design flaw or a glitch in a relatively primitive automation algorithm can easily turn any machinery into a runaway gun. So, embedding a kill switch in any autonomous weaponry is a must.
I can definitely recommend this book to anyone as an introduction to modern warfare and the role of computers in military applications.