In an excerpt¹ published by Salon entitled "Killer robots are coming next: The next military-industrial complex will involve real-life Terminators," Yale University's Wendell Wallach asks us to consider whether we, as a society, are ready for and capable of navigating competently through the robot-entrenched and drone-infested war zones expected in the near future. Competent navigation, importantly, requires addressing the question of limitations on "smart" weapons systems. Following in the footsteps of a proposed presidential executive order, Wallach suggests a ban and, short of a ban, "an international humanitarian principle that machines should not be making decisions about killing humans." Time is of the essence because, as Wallach explains, "more and more functions are being turned over to computerized systems," leaving humans out of the loop.

The idea of an international ban on fully autonomous killing systems has gained international support, but, as with other humanitarian concerns, a continuous public voice pushing for a ban is required to keep the momentum going. Wallach appeals to our ethical selves, noting that "delegating life-and-death decisions to machines is immoral because machines cannot be held responsible for their actions." However, with the proviso that robotic moral advancements should not be tested with autonomous lethal weapons, Wallach adds, "[i]f and when robots become ethical actors that can be held responsible for their actions, we can then begin debating whether they are no longer machines and are deserving of some form of personhood," perhaps opening the door, at that time (and not before), for such systems to become ethically qualified and eventually accepted as proxy military soldiers.
¹Excerpted from “A Dangerous Master: How to Keep Technology From Slipping Beyond Our Control” by Wendell Wallach. Published by Basic Books.