Disclaimer

Statements posted on this blog represent the views of individual authors and do not necessarily represent the views of the Center for Law Science & Innovation (which does not take positions on policy issues) or of the Sandra Day O'Connor College of Law or Arizona State University.

Wednesday Web Watch for March 4, 2015

When it comes to lethal autonomous weapons, one of the big fears is that war becomes more likely: such systems are less expensive and carry virtually no risk of injury to soldiers, yet they carry potentially catastrophic consequences. However, South Korea's Samsung SGR-A1 sentry robot, standing guard at the DMZ, suggests that in certain cases such systems can actually lessen the chances of war breaking out. At least that's what Alexander Velez-Green, writing for Lawfare, argues. Velez-Green takes a hard look at the SGR-A1 and the controversy over whether it is a "human on the loop" or "human in the loop" system; in other words, is the robot capable of autonomous engagement, or does it require a human go-ahead to fire? According to Velez-Green, "we cannot be sure that the SGR-A1 has an autonomous function. But we can confidently say that neither Samsung Techwin nor South Korea could admit to it even if it did." Nonetheless, he stands firm in his belief regarding the "stabilizing effect that systems like the SGR-A1 can bring to some conflict zones." Follow Velez-Green's argument here.