November 1, 2010  

Appropriate autonomy

In “Robot Revolution,” [July/Aug AFJ] Col. Christopher Carlile and Lt. Col. Glenn Rizzi write, “The bad autonomy is the type where robotics begins to make life-or-death decisions. … How much self-sufficiency or sovereignty are Americans willing to provide these unmanned, highly mobile killers on the battlefield? … very little.”

Do the authors include the automated Patriot and Counter-Rocket, Artillery, Mortar (C-RAM) defense systems in these discussions? Or active protection systems for vehicles? The U.S. has fielded air defense systems (the Patriot and the ship-based Aegis) with fully autonomous modes for decades, suggesting that the authors' attempt to draw clear, stark lines on the use of lethal force is overly simplistic. At the same time, the history of these systems suggests that existing processes to ensure adequate controls and safeguards on autonomous systems are insufficient. Both the Patriot and Aegis have been involved in inappropriate engagements (fratricide and civilian casualties) in which a failure to understand the capabilities and limitations of the automation may have played a role. These incidents suggest that the interplay between humans and automation in fast-moving real-world situations, often with ambiguous or confusing information, is just as important as the autonomy itself.

It is unrealistic to think that the U.S. will not deploy autonomous combat weapons in at least some defensive situations. Fully autonomous modes exist on the Patriot and Aegis because a missile salvo attack can exceed the capacity of humans to respond quickly enough in self-defense. As adversary weapons grow more sophisticated and proliferate to smaller actors, including nonstate groups, the range of situations in which autonomy is required for self-defense may expand. U.S. research into active protection systems such as Quick Kill already signals movement in this direction.

The possibility of unintentional collateral damage has been a significant concern for active protection systems, however, and the military will need to think critically about how to guard against fratricide, civilian casualties and unintentional escalation in all autonomous systems. Systems must be built to be fail-safe: the wartime environment in which they operate is fraught with friction, adversary innovation, civilians on the battlefield, and confusing or misleading information.

Where autonomy is appropriate and how to build fail-safe systems will be a significant challenge for the U.S. defense community, aggravated by the fact that adversaries may be more accepting of the risks of autonomy. The U.S. must balance the need to control autonomous systems against the need for weapons effective enough to prevail in an engagement. Offensive and defensive systems may require different rules. Distinguishing between weapons with human-in-the-loop targeting but automated execution (e.g., the Tomahawk) and those with fully autonomous target selection (e.g., the Harpy and LOCAAS) may also be significant. The U.S. may decide that full autonomy is necessary and acceptable for defensive systems, but that offensive systems must require human authorization for each target before engagement. The complexity of the decision-making process, which will only grow as computer technology evolves, is yet another factor: as complexity increases, it becomes harder to predict ahead of time what malfunctions or vulnerabilities may emerge in real-world environments.

To suggest that the Army will not delegate lethal engagement decisions to an autonomous system is not just overly simplistic; it is factually incorrect. The military has already done so for several decades. The authors are right to be skeptical of autonomy in combat decision-making. Deciding when autonomy is appropriate, and how adequate safeguards can be ensured, will be a significant challenge in the future.

— Paul Scharre, former Army Ranger, Washington, D.C.