Technological capabilities have already placed semiautonomous robots in a number of key military roles. New developments will expand their capabilities and applications. Who is morally responsible for actions conducted by military robots in wartime operations?
To examine this question, I use futurist Peter Schwartz’s planning technique of multiple hypothetical scenarios to describe possible issues in our future. These two scenarios reveal some of the challenges and possible ethical realities we will face in the future of robotic warfare. Scenario 1 focuses on the ethical issue of robots as weapons used to kill humans; Scenario 2 focuses on the idea of using fully autonomous robots as replacements for soldiers.
SCENARIO 1: A TOOL BY ANY OTHER NAME
In 2025, there is a slow-burning war between the developed nations and struggling, undeveloped nations. Information connectivity has reduced the power of the governments in the undeveloped nations relative to tribal and religion-based organizations.
The value of individual life is radically different from one region to the next. In the developed world, the political history and cultural developments have created a strong belief in each citizen’s individual rights. Current research and looming potential breakthroughs have significantly reinforced society’s view of the value of an individual’s life. An almost religious fanaticism drives a commonly held belief that immortality in terms of age and disease may become possible within the next 50 years.
Meanwhile, financial barriers to medical care, combined with longstanding cultural beliefs favoring large families, have led to massive overpopulation in the underdeveloped nations. Large populations, short life expectancy, dismal economic prospects and political upheaval have led to little value being placed on an individual’s life. Death is not only expected at an early age, but also often embraced as a potential gateway to various types of wonderful afterlife existences and/or a path to increased family honor and wealth. The needs of the tribe or faith are considered pre-eminent over one individual’s life.
The difference in the perceptions of life’s value has contributed strongly to the expanded use of military robots by the developed nations. Combatant forces in the underdeveloped world willingly sacrifice dozens of their own soldiers in exchange for a single enemy soldier’s life. At the same time, the developed world’s value of life is a double-edged sword for the soldiers on the ground because their standards of conduct are far stricter than their enemies’. Semiautonomous robots are cost-effective replacements for humans, both in raw financial terms and in certain capabilities: robots do not tire, feel fear or suffer from emotional trauma. These drivers have expanded the use of military robots beyond their early, limited roles as aerial drones and explosive ordnance disposal devices into automated vehicle controls, perimeter defense systems, patrolling and reconnaissance vehicles, and even direct-action weapons systems designed to cover and protect soldiers in firefights.
An ethical crisis has recently developed in the East African city of Mogadishu. An Australian infantry squad on a routine patrol through a market came under fire from personnel of unknown affiliation. With their robotic support, the squad identified and killed four of five attackers. The robots were programmed to use their “defend only” mode, but during the two-minute firefight, a robot killed a civilian. The sensory record of the fight downloaded from the robot shows a clear field of fire at a positively identified enemy combatant, but just as the robot fired its weapon, a civilian moved into the line of fire and was killed almost instantly.
An investigation is launched, but a low-quality video of the robot killing spreads across the Internet, originating from Web sites known to support anti-coalition religious groups. The first sharp rebukes come from a variety of anti-war and human rights nongovernmental organizations, as well as multiple anti-robot groups. There are multiple calls for the withdrawal of all “slaughter machines” and “deathbots” on the grounds that it is immoral to allow a robot to decide to kill a human. Religious leaders are split based on the regions in which they live. Those in the developed nations declare that robots have no soul and, therefore, any morals they possess are reflections of their programmers’ beliefs. Those in the undeveloped world declare robots evil machines of destruction unleashed upon innocent civilians. The U.N. Secretary General orders an independent investigation, which could have far-reaching impact because the U.N. is the strongest political bridge between the separate regions of the world.
The investigation reveals a remarkably clear picture of the events. The Australian soldiers have varying accounts of the details but generally agree it was a typical ambush. The data collected from the robots is incredibly detailed and shows a much more elaborate series of events. The video and thermal recordings allow investigators to recreate the event through a computer simulation. As the simulation plays out, it shows how everyone in the market took cover except the civilian killed, who crouched in front of and below the robot’s weapons. He then shifted to the side, placing himself in line with the robot’s weapons and the ambushing party, moving back and forth to match the robot as it repositioned itself to best protect the Australian troops. Just as the robot’s weapon stopped moving and began to fire, the civilian leapt head first into the stream of bullets.
Adding to the evidence of a setup is the analysis of the acoustic sensors, which reveals that as the victim came into position below the ambushers’ field of fire, they shifted their fire onto other targets. Because the robot was the closest threat and easiest target, the Australian government argues that the attackers were acting to ensure only the robot’s rounds would strike the victim. This is reinforced by recorded evidence that the attackers immediately called out to withdraw after the victim was killed, including words that translate as “God save the martyr.”
The U.N. investigation’s final findings conclude that the death was not intentional. The sensory evidence overwhelmingly points to the actions of the victim being performed in conjunction with the attackers. The investigators conclude that the individual performed a suicide attack with the intention of influencing the political situation of the conflict through martyrdom. They also conclude that the robots’ programming was appropriate for the circumstances and fit within the established rules of engagement; therefore, no charges are to be filed against the Australian commander.
Even as military robots change the application of warfare in a variety of ways, they are still essentially tools. In their most sophisticated versions, they are designed to perform certain acts without an immediate human in the decision loop. But they are still programmed before every mission with the limitations desired by the commander on the ground. Thus, they represent the most advanced weapons ever created but are only as useful as the soldier who wields them. All moral responsibility for their acts lies solely with the humans directing them.
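To make the tool argument concrete, here is a minimal, hypothetical sketch in Python of how pre-mission limitations such as Scenario 1’s “defend only” mode might be encoded, with the authorizing human recorded alongside the constraints. The names (EngagementMode, MissionParameters, may_fire) are invented for illustration; this is a sketch of the idea, not a description of any fielded system.

```python
from dataclasses import dataclass
from enum import Enum

class EngagementMode(Enum):
    OBSERVE_ONLY = "observe_only"
    DEFEND_ONLY = "defend_only"   # return fire only when friendly forces are under attack
    OFFENSIVE = "offensive"

@dataclass(frozen=True)
class MissionParameters:
    mode: EngagementMode
    rules_of_engagement: tuple[str, ...]  # constraints dictated by the commander
    authorized_by: str                    # the human who carries moral responsibility

def may_fire(params: MissionParameters,
             target_confirmed_hostile: bool,
             friendlies_under_fire: bool) -> bool:
    """Firing is permitted only inside the limits the commander programmed."""
    if params.mode is EngagementMode.OBSERVE_ONLY:
        return False
    if params.mode is EngagementMode.DEFEND_ONLY:
        return target_confirmed_hostile and friendlies_under_fire
    return target_confirmed_hostile

# Example: a "defend only" configuration like the one in the Mogadishu scenario.
patrol = MissionParameters(
    mode=EngagementMode.DEFEND_ONLY,
    rules_of_engagement=("positive identification required", "protect the squad"),
    authorized_by="squad commander",
)
print(may_fire(patrol, target_confirmed_hostile=True, friendlies_under_fire=True))
```

However sophisticated the sensing and aiming, every branch of such logic is written and authorized by people before the mission begins, which is why the scenario assigns responsibility to the commander rather than the machine.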
SCENARIO 2: I THINK, THEREFORE I FIGHT
In 2007, the South Korean government installed the first of a series of semiautonomous, fixed-position robots in the Demilitarized Zone. These robots were equipped with a wide variety of weapons, lethal and less-than-lethal, and were designed to engage targets within pre-programmed parameters. Although limited in decision-making options, they represented the first military application of such technology.
The same year, a consumer could purchase a Lexus LS sedan that parallel parked itself, an iRobot Corp. self-directing vacuum or a robotic pet iDog. To reduce production costs, commercial manufacturing had long been using assembly lines manned by programmed robots. Plans for future human-robot interaction were also developed through the European Robotics Research Network’s Roboethics Roadmap and the Robot Ethics Charter, independently created by the government of South Korea. These developments represented progressive steps in the overall acceptance of robots in day-to-day life, first in industrial applications, then in commercial consumerism and, finally, in direct military engagements.
By 2025, robot teaching assistants are widely used in a variety of public and private schools. They provide educational assistance through interactive tutoring that is tailored to each child’s specific goals, strengths, weaknesses and personality. Many families use similar robots — officially as in-home tutors (wirelessly linked to the school’s database), but often as nannies. Robots also have taken over a variety of mundane household tasks such as cleaning, dusting, laundry and yard maintenance. There are even automated garages for in-home vehicle maintenance. There is at least one robot for every American household.
By 2030, security robots are widely used as partners for human police officers. Initially, these robots assist with communications and identification from the squad cars. Over time, multiple ground and aerial versions are launched and controlled from the squad car, which directs every asset to best support the officer. The lower risk of injury to both officers and criminals eventually switches the roles, placing the robots in initial contact and the officers in support. A positive development of this switch is a sharp increase in conviction rates as a result of the robots’ highly accurate sensory equipment. This creates a growing need for prisons and their requisite guards, most of which, again, are variations of the autonomous robots. With their 24-hour operational capacity and immunity to bribes and threats, the robot guards dramatically improve conditions in American prisons.
Military applications run parallel to law enforcement; robots are used to maximize sensor capability and perimeter defense, and to augment military personnel. The combination of increased capability, reduced costs and lower risk drives military robots to become more widespread throughout the globe. Robust software is as important as hardware, specifically in how fast a robot can process data and react appropriately. The better the artificial intelligence, the faster the robot can “think” and react to tactical situations.
In 2035, robotic software has developed artificial intelligence to such a degree that robots are being used as personal assistants by politicians, diplomats and corporate executives. Robots’ abilities to communicate in all languages and via multiple information conduits, combined with access to the wealth of information available on the Internet, have made them irreplaceable.
But concerns have been raised that robotic assistants are making more and more decisions in positions of power that affect vast numbers of human beings. Because these robots are widely perceived to be unemotional and soulless, critics argue that they should not be allowed to make decisions at all, but rather should be restricted to supporting roles.
Top military commanders have found these types of robots to be essential in managing the vast data flow through a command center. With air-, land-, sea- and space-based units all continuously linked in real time, it has become impossible for humans to process this vast quantity of data in any kind of useful time period. The commanders’ assistant robots are not only capable of processing the data, but also can do so while mobile and while providing local security to the command center. To maximize the robots’ potential, they have been programmed to organize data based on parameters given by the commander and also to learn from the commander’s actions how best to react to data of equal priority and/or data that is not well-defined. These robotic assistants can anticipate their commanders’ courses of action with remarkable accuracy.
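A minimal sketch of that data-triage behavior, under the assumption of hypothetical category names and weights: reports are ranked by commander-supplied priorities plus weights the assistant learns by watching which reports the commander acts on first. Nothing here is drawn from a real command-and-control system.

```python
# Illustrative only: blend explicit commander guidance with learned preference.
commander_priorities = {"troops_in_contact": 1.0, "logistics": 0.3, "weather": 0.1}
learned_weights = {"troops_in_contact": 0.0, "logistics": 0.0, "weather": 0.0}

def observe_commander_choice(chosen_category: str, learning_rate: float = 0.1) -> None:
    """Nudge a category's learned weight upward when the commander acts on it first."""
    learned_weights[chosen_category] += learning_rate

def score(report_category: str) -> float:
    """Combine explicit priority with learned preference to rank a report."""
    return commander_priorities.get(report_category, 0.0) + learned_weights.get(report_category, 0.0)

def triage(report_categories: list[str]) -> list[str]:
    """Order reports so the most urgent, by the blended score, come first."""
    return sorted(report_categories, key=score, reverse=True)

# Example: after repeatedly seeing the commander act on logistics gaps,
# the assistant ranks them above routine weather reports.
observe_commander_choice("logistics")
print(triage(["weather", "logistics", "troops_in_contact"]))
```

The point of the sketch is that the assistant’s “judgment” is an accumulation of the commander’s own guidance and past choices, which is exactly what makes the Cúcuta episode below so unsettling when the commander is removed from the loop.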
In the 2036 battle of Cúcuta, Colombia, U.S. forces assist the Republic of Colombia in destroying a drug cartel’s stronghold. The role of robotic assistants changes forever when Lt. Gen. John Smith is critically injured during a pivotal moment in the battle and is unable to continue directing the allied forces. His robotic command center assistant continues to issue orders in the lieutenant general’s name via digital message traffic and voice recordings. The battle continues to be fought according to Smith’s plan, resulting in an allied victory.
After the details of the battle are leaked, numerous organizations issue statements for and against the robot assistant’s actions. The Army’s inspector general begins an investigation, which reveals that the robot’s actions violated the standard operating procedures regarding proper succession of command but were within Smith’s standing guidance on actions during a gap in command. If a human had made the same choices under the same conditions, it would represent a potential but not entirely clear violation of orders. Far more interesting, the investigation makes clear that the robot was making decisions based on an individual outlook developed from stored experiences. For all intents and purposes, the robot was thinking for itself. As a consequence, the investigators conclude that the robot was responsible both for its actions during the battle and for a wide variety of actions it had taken over many previous years.
This marks the first time a robot is held completely accountable for its actions. Although these results are viewed with outrage by some, the vast majority of Americans show no signs of concern because thinking robots have become a daily part of life. Politicians announce a sweeping change in the acceptance of artificial intelligence. The debates over a robot bill of rights begin. The Defense Department reviews its policies and procedures regarding military robots, to change their status from mechanical systems to artificially intelligent soldiers, sailors, Marines and airmen. Smith’s assistant robot is reprimanded for failing to facilitate the proper succession of command, and awarded a medal for actions in combat. It is still pondering the significance of these two opposing actions.
In this scenario, independent intelligence developed despite fail-safes generated by the programmers. The ability to learn from its experiences, as well as from a commander’s preferences, would be insufficient for a robot to be truly autonomous. It would have to be able to take the final step of violating its inherent programming to act in a manner that it individually determined was the best course of action. Assuming that it remains incapable of developing emotion and/or ego, it is likely that the robot would still act to promote the greatest good. The moral difficulties would lie in decisions for which all the options are bad, but one must be chosen. How it would make such decisions would likely be based on the weights its learned-value system placed on the various outcomes. This would represent an artificial intelligence that would be held responsible for its actions.
BACK TO THE PRESENT
There are no new ethical dilemmas created by robots being used today. Every system is either under continuous, direct control by humans, or performing limited semiautonomous actions that are dictated by humans. These robots represent nothing more than highly sophisticated weapons, and the user retains full responsibility for their implementation.
The potential for a new ethical dilemma to emerge comes from the approaching capability to create completely autonomous robots. As we advance across the field of possibilities from advanced weapons to semiautonomous weapons to completely autonomous weapons, we need to understand the ethical implications involved in building robots that can make independent decisions. We must develop a distinction between weapons that augment our soldiers and those that can become soldiers. Determining where to place responsibility can begin only with a clear definition of who is making the decisions.
It is unethical to create a fully autonomous military robot endowed with the ability to make independent decisions unless it is designed to screen its decisions through a sound moral framework. Without the moral framework, its creator and operator will always be the focus of responsibility for the robot’s actions. With or without a moral framework, a fully autonomous decision-maker will be responsible for its actions. For it to make the best moral decisions, it must be equipped with guilt parameters that guide its decision-making cycle while inhibiting its ability to make wrong decisions. Robots must represent our best philosophy or remain in the category of our greatest tools.
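As a purely illustrative sketch of what screening decisions through a moral framework with “guilt parameters” might look like computationally, the fragment below (all names, constraints and weights invented) excludes options that breach a hard moral constraint outright, then penalizes anticipated harm before choosing among the remaining, least-bad options, echoing the learned-value weighting described in Scenario 2.

```python
from typing import NamedTuple

class Option(NamedTuple):
    name: str
    mission_value: float           # learned-value estimate of the outcome
    expected_civilian_harm: float  # drives the guilt penalty
    violates_constraint: bool      # e.g. deliberate targeting of noncombatants

GUILT_WEIGHT = 10.0  # how strongly anticipated harm inhibits a choice

def moral_screen(options: list[Option]) -> Option | None:
    """Discard impermissible options, then pick the least-bad remainder."""
    permissible = [o for o in options if not o.violates_constraint]
    if not permissible:
        return None  # refuse to act; escalate to a human
    return max(permissible,
               key=lambda o: o.mission_value - GUILT_WEIGHT * o.expected_civilian_harm)

# Example: every option carries a cost, but one is both permissible and least harmful.
choice = moral_screen([
    Option("hold fire and risk the squad", 2.0, 0.0, False),
    Option("suppressive fire near civilians", 6.0, 0.5, False),
    Option("strike the building regardless of occupants", 9.0, 1.0, True),
])
print(choice.name if choice else "defer to human commander")
```

Whether such weights could ever constitute a “sound moral framework,” rather than a veneer over its designers’ values, is precisely the question the preceding paragraph raises.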
Maj. David F. Bigelow is studying for a master’s degree in business administration under a combined detachment with the University of Texas at Arlington and Lockheed Martin Missiles and Fire Control in Dallas-Fort Worth. He reports for his next assignment, with the 401st Army Field Support Brigade in Kuwait, in November.