Why ‘unmanned systems’ don’t shrink manpower needs
Robots, intelligent machines and other autonomous systems have long been promoted as replacements for humans in various kinds of jobs — a promise that takes on special luster as budgeteers try to control personnel costs.
“By the end of the century, there will be virtually no humans on the battlefield,” Globalsecurity.org director John Pike has predicted. “Robots do what you tell them, and they don’t have to be trained.”
Yet the military’s growing body of experience shows that autonomous systems don’t actually solve any given problem, but merely change its nature. It’s called the autonomy paradox: The very systems designed to reduce the need for human operators require more manpower to support them.
Consider unmanned aircraft — which carry cameras aloft without a crew, yet require multiple shifts of operators, maintainers and intelligence analysts on the ground to extract useful data — and it becomes clear that many researchers and policymakers have been asking the wrong question. It is not “what can robots do without us?” but “what can robots do with us?”
Answering that question — and embracing a new era where “joint warfare” means human-robot teams — requires a better understanding of autonomy and a better effort to design for human-machine interdependence.
DEFINING TERMS
When President Kennedy said we’d put a man on the moon, everyone understood the “what,” even if they disagreed on the “how.” In the world of autonomy, there is still much debate on both counts.
Part of the problem is the lack of a common taxonomy. Many, for example, confuse the terms “automated” and “autonomous.”
Automated processes are now common, particularly in manufacturing, and involve execution and control of a narrowly defined set of tasks without human intervention. Automation is used when the parameters are well-defined and the environment is highly structured. One example is the Air Force’s air-launched cruise missile. First deployed in 1982, it is launched a great distance from its target, flies a pre-programmed flight path, avoids air defenses and then strikes with high accuracy — a highly programmed, closed-loop event.
In contrast, autonomous systems can perform tasks in an unstructured environment. Such a system is marked by two attributes: self-sufficiency — the ability to take care of itself — and self-directedness — the ability to act without outside control.
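To make the distinction concrete, here is a minimal Python sketch (entirely invented, not drawn from any fielded system) in which an automated controller simply replays a fixed plan, while an autonomous one monitors its situation and re-plans around obstacles it senses along the way:

```python
# A deliberately toy sketch (all values invented) of the distinction drawn above:
# the automated controller replays a fixed, pre-programmed plan, while the
# autonomous controller keeps monitoring its situation (self-sufficiency) and
# chooses its own next action when the plan stops fitting (self-directedness).

def automated_mission(route):
    """Closed-loop automation: execute a pre-programmed plan with no adaptation."""
    for waypoint in route:
        print(f"flying pre-programmed leg to waypoint {waypoint}")
    print("terminal maneuver")

def autonomous_mission(start, goal, hazards):
    """Autonomy sketch: sense the (changing) environment and re-plan en route."""
    position = start
    path = [position]
    while position != goal:
        step = 1 if position < goal else -1
        proposed = position + step
        if proposed in hazards and proposed != goal:
            proposed = position + 2 * step     # detour around a sensed obstacle
        position = proposed
        path.append(position)
    return path

automated_mission(["A", "B", "C"])
print(autonomous_mission(start=0, goal=6, hazards={3}))   # -> [0, 1, 2, 4, 5, 6]
```

The automated controller never looks outside its plan; the autonomous one must both keep itself on course and decide for itself what to do next.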
When you hear that nearly 20,000 robots and remotely operated vehicles have been deployed in support of U.S. troops in Iraq and Afghanistan, it may be hard to comprehend just how far away the goal of autonomy really is. The vast majority of these robots are still controlled by human handlers, and those that aren’t lack even the most basic sense of context. Consider that at RoboCup, the premier international competition for humanoid robots, the current challenge is something as simple as having a robot consistently turn, spot, approach and “kick” a soccer ball.
Even so, there is good reason to believe the primary engineering challenges of self-sufficiency and self-directedness in robots and other machine systems can be overcome in the next decade. What’s much harder, and may even exceed the limits of our design capability, is the development of context and an ability to understand intent, the Achilles’ heel of artificial intelligence.
In 1942, Isaac Asimov introduced his three laws of robotics, which continue to influence the research and development of robots today. Asimov correctly foresaw one of the greatest challenges of robotics: You could program a robot to behave logically, but it would still have difficulty doing the “right” thing.
Of course, in a military context, even humans are not completely autonomous. They must report to a chain of command and coordinate their activities with others. Preset boundaries, rules of engagement and regulations go a long way toward defining and regulating behavior for both man and machine, but they are not sufficient. So why would we design machines to a higher standard of independence than we hold humans to?
This is a critical point from a policy perspective, because although one can choose to delegate levels of decision-making to an autonomous machine, one cannot escape responsibility for the resulting actions. We could easily automate the launch of nuclear weapons on predetermined targets, for example, but we don’t, because the consequences of a machine error are unacceptable.
HOW TECHNOLOGY CAN HELP — AND HURT
So with some level of human control inevitable, we need to look at how human-robot interactions work. In 2007, researchers conducted a comprehensive field study to examine how such operations succeed and fail. What they found was startling.
In systems with low autonomy, like those that are completely teleoperated, the machine can easily become a burden on its human counterpart. The human serves essentially as the robot’s caretaker and is unavailable for any other task while performing this role.
As self-directedness and autonomy increase, humans are freed to concentrate on higher-level tasks. But the self-directedness of the robot often exceeds its competence, and humans may trust the system more than they should.
A perfect example occurred in 1988, when the guided-missile cruiser Vincennes shot down an Iranian civilian airliner. The ship’s Aegis system identified the airliner as a fighter jet, and despite hard data to the contrary, the fatigued crew was reluctant to override the computer’s conclusion.
There are conflicting opinions on who was at fault in that unfortunate incident: the humans or the computer. Most agree the Aegis system had not been designed to perform in such situations, and also that the crew engaged in “scenario fulfillment,” unconsciously changing information to match patterns they were expecting.
It is tempting to believe the solution is simply to engineer more autonomy. But roboticist Robin Murphy’s law of robotics, derived from multiple studies of search-and-rescue operations, states: Any deployment of robotic systems will fall short of the target level of autonomy, creating or exacerbating a shortfall in mechanisms for coordination with human stakeholders.
In highly autonomous systems, where both self-directedness and self-sufficiency are present, the system becomes opaque. Operators frequently ask questions such as: What is it doing? Why is it doing that? What’s it going to do next?
DESIGNING FOR INTERDEPENDENCE
Werner Dahm, the former Air Force chief scientist, noted in the service’s recent “Technology Horizons” report, “Although humans today remain more capable than machines for many tasks, natural human capacities are becoming increasingly mismatched to the enormous data volumes, processing capabilities, and decision speeds that technologies offer or demand; closer human-machine coupling and augmentation of human performance will become possible and essential.”
This highlights how increasing a system’s autonomy without also designing for interdependence often fails to improve mission performance. We simply can no longer afford to develop technological solutions that don’t consider the human role in using them. It’s an old refrain, but one the engineering community, and to some degree the military, has been reluctant to embrace.
The design parameters for an interdependent human-machine system look very different from those of a machine designed to maximize autonomy.
Matt Johnson at the Institute for Human and Machine Cognition proposes that interdependent systems should possess mutual awareness (context), consideration (adjustability) and the capability to support (reciprocation). This means we must design systems such that a machine not only provides support for others’ dependence on it but can also deal with its own dependence on others.
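One way to picture what that might mean in software is the rough sketch below. The class and method names are illustrative assumptions, not Johnson’s formal framework; the point is a machine that exposes its own state, accepts adjustment from a human teammate, and can both take on work and ask for help:

```python
# A rough sketch (names invented for illustration) of designing for
# interdependence: the machine makes what it is doing inspectable (awareness),
# lets the human adjust how much it decides on its own (consideration), and
# can both give help and ask for it (reciprocation).

from dataclasses import dataclass, field

@dataclass
class TeamMachine:
    task: str = "idle"
    autonomy_level: str = "supervised"        # e.g. "manual", "supervised", "autonomous"
    help_requests: list = field(default_factory=list)

    # Mutual awareness: internal state is reported, not opaque.
    def report_status(self) -> dict:
        return {"task": self.task, "mode": self.autonomy_level,
                "needs_help_with": list(self.help_requests)}

    # Consideration: the human can adjust the machine's degree of self-direction.
    def set_autonomy(self, level: str) -> None:
        self.autonomy_level = level

    # Reciprocation, outward: support a teammate's dependence by taking a task.
    def take_task(self, task: str) -> None:
        self.task = task

    # Reciprocation, inward: manage its own dependence by asking for help.
    def request_help(self, issue: str) -> None:
        self.help_requests.append(issue)


machine = TeamMachine()
machine.take_task("scan route for anomalies")
machine.request_help("low confidence on object at grid 4C")
print(machine.report_status())
machine.set_autonomy("manual")    # the human dials autonomy down for the hard case
```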
Thus we can look at intelligence, surveillance and reconnaissance platforms in a new light. To date, the general philosophy has been one of more: more sensors, more video streams, more data. There is no doubt we still lack the data necessary to realize the Air Force science and technology vision of “anticipate, find, fix, track, target, engage, assess … anyone, anytime, anywhere.” This has driven substantial, ongoing research to make surveillance platforms nearly or completely autonomous from the standpoint of the pilot in order to continue to increase the number of “eyes in the sky.”
But it’s clear that adding more autonomy or increasing utilization of such a system creates as many problems as it solves. At a recent autonomy workshop sponsored by the Air Force Research Laboratory, an Air Combat Command representative said, “Increases in collection capacity presents the major challenge of our era: We must still consider the application.”
After all, the goal isn’t to see everything, but to find a target using the fewest signatures and resources resulting in the greatest confidence. To do that, we have to imagine a more interdependent system that changes how we task, collect and analyze data.
Such a system could match resources and make decisions based on the pace of operations, shifting seamlessly from human to machine control and back again. It could combine data mining across collection methods (for example, social networks or public records) with a human’s contextual understanding (for example, noticing trucks with mattresses that lead to improvised explosive device building sites), allowing for more meaningful and efficient detection of anomalies.
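A toy sketch of that division of labor, with entirely hypothetical data and thresholds, might look like the following: the machine mines fused observations for statistical outliers, and the analyst’s contextual knowledge decides which of those flags actually matter.

```python
# A toy sketch (hypothetical data, places and thresholds) of the interdependent
# loop described above: the machine flags statistical anomalies across fused
# observations, and human-supplied context promotes or suppresses those flags.

from statistics import mean, stdev

observations = {            # counts of a signature per named location (made up)
    "route_a": [3, 4, 2, 3, 4],
    "route_b": [2, 3, 3, 2, 9],      # sudden spike in the latest period
    "route_c": [5, 5, 6, 5, 5],
}

human_context = {            # analyst-supplied context the data alone cannot provide
    "route_b": "market day; mattress-laden truck traffic is routine here",
}

def machine_flags(obs, z_threshold=2.0):
    """Machine side: flag locations whose latest count is a statistical outlier."""
    flagged = []
    for place, counts in obs.items():
        baseline, latest = counts[:-1], counts[-1]
        if stdev(baseline) == 0:
            continue
        z = (latest - mean(baseline)) / stdev(baseline)
        if z > z_threshold:
            flagged.append(place)
    return flagged

def joint_decision(flags, context):
    """Human side: contextual knowledge suppresses or confirms machine flags."""
    return [f for f in flags if "routine" not in context.get(f, "")]

print(machine_flags(observations))                          # -> ['route_b']
print(joint_decision(machine_flags(observations), human_context))   # -> []
```

Neither half works alone: the machine cannot know it is market day, and the analyst cannot scan every feed for spikes.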
In the short term, researchers are working hard to have one pilot control multiple vehicles. In the long term, the goal is to reduce the number of vehicles that must be controlled, not just due to advances in autonomy, but advances throughout the intelligence collection process. This means there needs to be a direct link between scientific and engineering accomplishments and the development of concepts of operations and techniques, tactics and procedures.
RESEARCH PRIORITIES
While the advances needed to increase the self-sufficiency and self-directedness aspects of autonomy are considerable, they are relatively well understood. But we must also look deeper into several other facets of man-machine interdependence:
Adjusting the human/machine dynamic. One of the struggles is determining whether machines should be relegated primarily to the role of decision aids or whether we want them to be decision-makers in their own right. If the latter, there are challenges in communication and in the ability to understand intent. Humans are notorious for giving one another imprecise instructions, and the problem gets even tougher when those instructions are interpreted by the literal, logical “brain” of an autonomous robot. As the Aegis example shows, the level of autonomy must change as the situation does.
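A simplified sketch of what such adjustable autonomy could look like in code follows; the categories and thresholds are invented for illustration only:

```python
# A simplified sketch (invented thresholds and categories) of adjustable
# autonomy: rather than fixing the machine as either a decision aid or a
# decision-maker, its authority shifts with the situation, falling back
# toward human control as stakes rise or machine confidence drops.

def autonomy_mode(machine_confidence: float, consequence: str, operator_workload: float) -> str:
    """Return who decides: the human, the machine, or the machine subject to human veto."""
    if consequence == "lethal":
        return "human decides"                      # a person stays in the loop
    if machine_confidence < 0.6:
        return "human decides"                      # the machine defers when unsure
    if operator_workload > 0.8 and consequence == "low":
        return "machine decides"                    # offload routine calls under high workload
    return "machine recommends, human can veto"

print(autonomy_mode(machine_confidence=0.9, consequence="lethal", operator_workload=0.5))
print(autonomy_mode(machine_confidence=0.95, consequence="low", operator_workload=0.9))
```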
Reducing complexity in presenting data. Zooming interfaces that allow analysts to compare and contrast across time and distance will permit correlations of activities at unprecedented levels. Combined with the data mining capabilities of machines, this can create a more holistic picture and increase confidence. The difficulty is presenting the data in a manner understandable by humans. This involves research into predicting which visualization perspectives best reduce complexity. For example, in space situational awareness, it is common to put the viewer 1,000 miles in space to appreciate the tens of thousands of satellites in orbit around the Earth. We must continue to find visualization methods that better match the machine’s ability to correlate with the human’s need to simplify.
Developing trust in automation. As the Air Force’s “Technology Horizons” report states, “developing methods for establishing ‘certifiable trust in autonomous systems’ is the single greatest technical barrier that must be overcome to obtain the capability advantages that are achievable by increasing use of autonomous systems.” There is a delicate balance to maintain as well. Too much trust, and humans fail to serve as reliable caretakers. Too little, and the system falls into disuse. In an interdependent system, this is even more crucial because the machine is now a teammate, not a tool. We may abide, barely, the spinning circle that appears when our Windows operating system is locked in some unknown task, but this kind of behavior is intolerable in a teammate. Unfortunately, highly predictable systems are also vulnerable. In the end, this is not just a matter of engineering transparency but a difficult optimization problem.
Improving human performance. In 1984, Aryeh Finegold said, “One of the big problems is the tendency for the machine to dominate the human.” Sadly, despite our progress over the last several decades, this is still true. There is tremendous need and opportunity to improve the human side of the equation. Technology can improve the speed and quality of decision-making, increase the bandwidth of information uptake while simultaneously improving retention, and reduce human frailty.
Peter Singer has said about the growing use of robots and other autonomous systems in war: “When historians look back at this period, they may conclude that we are today at the start of the greatest revolution that warfare has seen since the introduction of atomic bombs.”
Just as atomic bombs didn’t solve our problems but changed their nature, so it is with intelligent machines and robots. Autonomy research alone will quickly increase the complexity of the relationship between man and machine and exacerbate our manpower issues. Ultimately, the manpower savings from interdependent robots and machines come not from replacing people but from changing how we accomplish our missions.
Autonomy research is still nascent and the opportunities for increased research in interdependence are ripe. But it’s important to dovetail these research priorities quickly. After all, who do we want to shape the nature of the new joint warfare: man or machine?