Monday, February 18, 2008

Civilizing the Killer Robot

Norbert Wiener Award-winner Bruce Schneier recently blogged about Ronald Arkin's article on developing ethical systems to guide killer robots in the application of deadly force. Mr. Arkin suggests that robots may actually make war more civilized, not less.

He suggests that robots may make better combat decisions than humans, and that they could report humans who violate the laws of war to a higher authority. His surveillance-of-war argument intrigues me:

[T]he trend is clear: warfare will continue and autonomous robots will ultimately be deployed in its conduct.

Given this, questions then arise regarding how these [robotic] systems can conform as well or better than our soldiers with respect to adherence to the existing Laws of War...

This is no simple task, however. In the fog of war it is hard enough for a human to be able to effectively discriminate whether or not a target is legitimate. Fortunately for a variety of reasons, it may be anticipated, despite the current state of the art, that in the future autonomous robots may be able to perform better than humans [in ethical war fighting] under these conditions, for the following reasons:

1. The ability to act conservatively: i.e., they do not need to protect themselves in cases of low certainty of target identification. UxVs do not need to have self-preservation as a foremost drive, if at all. They can be used in a self-sacrificing manner if needed and appropriate without reservation by a commanding officer.

2. The eventual development and use of a broad range of robotic sensors better equipped for battlefield observations than humans currently possess.

3. They can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events. In addition, “Fear and hysteria are always latent in combat, often real, and they press us toward fearful measures and criminal behavior”. Autonomous agents need not suffer similarly.

4. Avoidance of the human psychological problem of “scenario fulfillment” is possible, a factor believed partly contributing to the downing of an Iranian Airliner by the USS Vincennes in 1988. This phenomenon leads to distortion or neglect of contradictory information in stressful situations, where humans use new incoming information in ways that only fit their pre-existing belief patterns, a form of premature cognitive closure. Robots need not be vulnerable to such patterns of behavior.

5. [Robots] can integrate more information from more sources far faster before responding with lethal force than a human possibly could in real-time. This can arise from multiple remote sensors and intelligence (including human) sources, as part of the Army’s network-centric warfare concept and the concurrent development of the Global Information Grid.

6. When working in a team of combined human soldiers and autonomous systems, they have the potential capability of independently and objectively monitoring ethical behavior in the battlefield by all parties and reporting infractions that might be observed.

This presence [of robots] alone might possibly lead to a reduction in human ethical infractions. It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of. Unfortunately the trends in human behavior in the battlefield regarding adhering to legal and ethical requirements are questionable at best.

Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture (emphasis added, citations omitted)
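Arkin's first criterion, acting conservatively under low certainty of target identification, can be pictured as a simple decision gate. The sketch below is my own illustration, not anything from Arkin's actual architecture; the threshold value, names, and the idea of fusing sensor readings into a single confidence estimate are all assumptions for the sake of the example. The key property it demonstrates is that, unlike a human soldier, the system never lowers its standard of identification just because it is itself under attack.

```python
from dataclasses import dataclass

# Assumed threshold; a real system would derive this from the rules of engagement.
CONFIDENCE_THRESHOLD = 0.95

@dataclass
class TargetAssessment:
    combatant_confidence: float  # fused estimate from multiple sensors (Arkin's point 5)
    under_fire: bool             # whether the platform itself is being attacked

def may_engage(assessment: TargetAssessment) -> bool:
    """Permit lethal force only under high-confidence target identification.

    Note what is deliberately absent: `under_fire` plays no role.  A platform
    with no self-preservation drive (Arkin's point 1) holds fire on an
    ambiguous target even while being shot at.
    """
    return assessment.combatant_confidence >= CONFIDENCE_THRESHOLD

# A platform under attack facing an ambiguous target still holds fire:
ambiguous = TargetAssessment(combatant_confidence=0.6, under_fire=True)
assert may_engage(ambiguous) is False
```

The design choice worth noticing is that self-preservation is not an input to the decision at all, which is exactly the asymmetry between robot and human that Arkin argues could make the machine the more conservative combatant.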
The comments on Schneier's blog suggest how difficult it may be to program autonomous machines, especially ones with lethal powers, and how hard it might be to draw a meaningful distinction between human and machine autonomy. Once the system becomes sufficiently remote from the user in phenomenological terms, it is hard to disambiguate the user from the machine.

Through augmented reality, the machine would overlay the soldier's view with information such as friend-or-foe identification. I can imagine a soldier killing people at a distance merely because the machine tells him they are the enemy. As I understand it, modern combined arms warfare generally requires a Forward Observer to actually call in an airstrike in the vast majority of circumstances. But truly removing the human observer from the airspace and sending in machines with a bare minimum of human control seems vaguely troubling.

Cruise missiles strike me as a case where no person is available to verify the target, but they are employed far more rarely than artillery and air strikes (in part because of their great expense).

But I think our weapons systems are verging on a significant leap in power and autonomy. For instance, the Danger Room post about an Israeli air defense system, "Israel Eyes Thinking Machines to Fight 'Doomsday' Missile Strikes," reinforces for me the seriousness of this topic.

I am both fascinated and terrified by the latest autonomous robot creations. They outstrip stupid machines like land mines the way the sun outstrips the stars.

Richard Morgan's novel Broken Angels is a fabulous exploration of combat between man and machines in the far future and some damn good military sci-fi. Iain Banks' novel Excession looks even farther into the future when humans are cared for by the machines they created, but which have far exceeded the limits of our primitive organic brains.
