Worldwide Web Watch

In an excerpt¹ published by Salon entitled "Killer robots are coming next: The next military-industrial complex will involve real-life Terminators," Yale University's Wendell Wallach asks whether we, as a society, are ready and able to navigate competently through the robot-entrenched, drone-infested war zones expected in the near future. Competent navigation, importantly, requires addressing the question of limitations on "smart" weapons systems. Building on a proposed presidential executive order, Wallach suggests a ban and, short of a ban, "an international humanitarian principle that machines should not be making decisions about killing humans." Time is of the essence because, as Wallach explains, "more and more functions are being turned over to computerized systems," leaving humans out of the loop. The idea of an international ban on fully autonomous killing systems has gained international support, but as with other humanitarian concerns, a continuous public voice pushing for a ban is required to keep the momentum going. Wallach appeals to our ethical selves, noting that "delegating life-and-death decisions to machines is immoral because machines cannot be held responsible for their actions." However, with the proviso that robotic moral advancements should not be tested with autonomous lethal weapons, Wallach adds, "[i]f and when robots become ethical actors that can be held responsible for their actions, we can then begin debating whether they are no longer machines and are deserving of some form of personhood," perhaps opening the door, at that time (and not before), for such systems to be ethically qualified and eventually accepted as proxy military soldiers.

¹ Excerpted from "A Dangerous Master: How to Keep Technology From Slipping Beyond Our Control" by Wendell Wallach. Published by Basic Books.