5th Annual Governance of Emerging Technologies Conference: Part 4

What follows is Part 4 of a series of select summaries, prepared by 2L Jesse James, of presentations given at LSI's Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics, held at the Sandra Day O'Connor College of Law, Beus Center for Law & Society, Phoenix, AZ, on May 17-19, 2017.

Plenary Session 2 – Responsible AI Development

Wendell Wallach of The Hastings Center and Yale University's Interdisciplinary Center for Bioethics brings to light key issues in responsible artificial intelligence (AI) development. He notes that the field is "generally one of anxiety, shaped by a concentrated set of personalities and private actors," with an ethics discourse fueled by a sense of doom. AI ethics touches many contemporary topics, including cybersecurity, healthcare, employment, and social media. Wallach singles out "Seven Big Issues": cybersecurity and cyberwarfare, adversarial AI (i.e., bad actors), algorithmic bias, transparency, technological unemployment, lethal autonomous weapons, and superintelligence.

When it comes to AI oversight, Wallach favors "soft" governance (as opposed to legislation, regulation, or other "hard" governance tools) using a nimble, flexible, and adaptive approach. He draws on Building Agile Governance for AI and Robotics, where he emphasizes: "[t]here is a need for new more agile and adaptive approaches to the governance of emerging technologies. These approaches must rely upon responsible research and innovation, industry standards and oversight, technological solutions to issues where feasible, foresight and planning, multi-stakeholder engagement, and public input to establish values and desired outcomes."

Subbarao Kambhampati of ASU’s School of Computing, Informatics & Decision Systems Engineering speaks on critical issues with AI development, problems with current dialogues about AI systems, and how laws must adapt to our rapidly changing environment.

Kambhampati begins by outlining the necessities for the responsible creation of AI systems. Safety must be treated as critical, and the systems must furthermore be fair, transparent, and accountable. Only then, he says, will there be healthy collaboration between people and AI systems.

But how does one accomplish these goals? That is something yet to be determined, and Kambhampati believes there is a lot of merit in providing venues and incentives for researchers to seriously plan avenues to achieve these objectives. He lauds AI grand challenges that encourage researchers to address some of the most pressing long-term social issues and believes that awards should be given to academics who make the biggest contributions to AI, people, and society. Furthermore, Kambhampati says much is to be gained through programs providing resources to researchers at non-profit organizations.

Kambhampati offers some of his personal perspectives on responsible research as well. He says we live in very interesting times where it is "all AI all the time" and almost everyone is an expert. Unfortunately, because there is a great deal of basic misunderstanding about the technology, AI gets credit and blame indiscriminately for things that are not its doing. A conflation of 'autonomy' and 'intelligence' muddies the waters for understanding and discussion, and fault is frequently misassigned: problems blamed on bias in an AI's algorithms often stem instead from the invalid or incomplete data sets that are the AI's only inputs, as the sketch below illustrates.
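
To make the distinction concrete, here is a toy sketch (ours, not Kambhampati's; the hiring scenario and all names are invented). The learning rule itself treats every group identically, yet it produces skewed outputs when trained on an incomplete log, so the fault lies in the data rather than the algorithm:

    from collections import Counter, defaultdict

    def train(rows):
        """Per-group majority vote: the rule itself treats every group identically."""
        by_group = defaultdict(Counter)
        for group, label in rows:
            by_group[group][label] += 1
        return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

    # Complete data: both groups are hired at the same 70% rate.
    complete = ([("a", "hire")] * 70 + [("a", "reject")] * 30 +
                [("b", "hire")] * 70 + [("b", "reject")] * 30)

    # Incomplete log: most of group b's successful outcomes were never recorded.
    incomplete = ([("a", "hire")] * 70 + [("a", "reject")] * 30 +
                  [("b", "hire")] * 10 + [("b", "reject")] * 30)

    print(train(complete))    # {'a': 'hire', 'b': 'hire'}
    print(train(incomplete))  # {'a': 'hire', 'b': 'reject'} -- the skew comes from the data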

He concludes that law seems to be playing catch-up and needs to take into account new phenomena on the digital horizon. When masses of people agree to software terms of service, how does one verify contracts that are generated on a per-interaction basis? Kambhampati closes by noting there are solutions, such as meta-verifying the space of possible contracts, but ultimately new perspectives are needed in law to provide imaginative solutions to our present problems.
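
One way to picture "meta-verifying the space of contracts" (a hypothetical sketch of ours; the template, terms, and policy below are invented, not a proposal from the talk): if per-interaction contracts are generated from a finite template, an invariant can be checked once over every contract the template can produce, rather than at each interaction:

    from itertools import product

    # Hypothetical template: each generated contract picks one value per term.
    TEMPLATE = {
        "data_retention_days": [30, 90, 365],
        "liability_cap_usd": [1_000, 10_000, 100_000],
        "arbitration_required": [True, False],
    }

    def satisfies_policy(contract):
        # Invariant that every generated contract must satisfy.
        return (contract["data_retention_days"] <= 365
                and contract["liability_cap_usd"] >= 1_000)

    # Verify the whole space (3 x 3 x 2 = 18 contracts) once, offline,
    # instead of verifying each per-interaction contract as it is generated.
    space = (dict(zip(TEMPLATE, values)) for values in product(*TEMPLATE.values()))
    print(all(satisfies_policy(c) for c in space))  # True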

John Havens of the Institute of Electrical and Electronics Engineers (IEEE) Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems speaks on the importance of ethics in advancing technology for humanity. IEEE has multiple units addressing ethics: its code of ethics, professional ethics, and technological ethics. The Global Initiative came together in 2015 and is part of the IEEE Standards Association. The group published Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems in December 2016, which has been incredibly influential in the field and has been translated into many languages. The group is presently working on Ethically Aligned Design 2.

Havens speaks about IEEE's mandate and mission. Its mandate is to create "ethically aligned" autonomous and intelligent systems, incorporating ethical aspects of human well-being that may not automatically be considered in the current design and manufacture of autonomous and intelligent systems (A/IS) technologies. IEEE's mission is to ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems.

Additionally, IEEE is pursuing global standards in lieu of a formal global governance structure. Havens outlines the IEEE P7000 series of standards projects, beginning with IEEE P7000 itself, a model process for addressing ethical concerns during system design:

IEEE P7001 – Transparency of Autonomous Systems

IEEE P7002 – Data Privacy Process

IEEE P7003 – Algorithmic Bias Considerations

IEEE P7004 – Standard on Child and Student Data Governance

IEEE P7005 – Standard on Employer Data Governance

IEEE P7006 – Standard on Personal Data AI Agent

Havens concludes the discussion by cautioning that while standards can fill governance gaps, there is undoubtedly an economic, market-driven aspect to their adoption. The standards must be useful; if they are not workable for corporate or NGO practitioners, they will never be put into practice.

Spring Berman of ASU's Ira A. Fulton Schools of Engineering discusses swarm robotics and recent developments within the field. She outlines the applications of swarm robotics (from monitoring and patrolling to biomedical applications at the micro or nano scale), identifies technical challenges for swarm robotics, such as denied environments, and highlights recent advances in robotic swarms such as unmanned aerial vehicles and autonomous cars.

Berman lists some of the fields that will benefit greatly from swarm robotics: environmental sensing, cartography, exploration, monitoring / patrolling, disaster response, search and rescue, containing oil spills, mine clearance, agriculture, mining, manufacturing, construction, warehousing, transportation, delivery, surveillance and reconnaissance, explosive ordnance disposal, targeted cancer therapy and cell structure imaging (at the micro or nanoscale).

But there are technical challenges for swarm robotics, as Berman acknowledges. Scaling up is difficult: it is infeasible to plan and monitor the actions of hundreds to thousands of robots by hand. Theoretical guarantees on performance are required, as well as adaptability to changing conditions and objectives in real time. Human-swarm interaction poses its own problems, specifically the ability to direct swarms with precision, determining which user interfaces are most intuitive, cognitive load, and user training. Swarm robustness also leaves much to be desired: long-duration autonomy, operation in the presence of errors and individual robot failures, verification of autonomous and learning systems, and non-deterministic behaviors all remain open issues. Lastly, swarms suffer in denied environments where they cannot access GPS, have limited communication bandwidth to a central controller, or exceed their range.

Berman excitedly lists enabling technologies that may allow swarm robotics to surmount these obstacles: new sensors, smart materials, advances in communication and free-space optics, deep learning, cloud computing, virtual and augmented reality, the Internet of Things, possible new user interfaces with direct brain control, synthetic biology, and advances in analyzing natural swarm activity. She also sees cause for hope in recent advances in robotic swarms, such as improved decision-making under uncertainty and better decentralized information gathering and data fusion, illustrated in the sketch below.
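
To illustrate what decentralized data fusion can look like (a minimal sketch of ours, not Berman's work: textbook consensus averaging with invented sensor readings), each robot talks only to its immediate neighbors, yet the whole swarm converges on the global mean with no central controller:

    def consensus_step(estimates, neighbors, weight=0.5):
        """One synchronous round: each robot blends its value with its neighbors' mean."""
        updated = {}
        for robot, value in estimates.items():
            nbr_mean = sum(estimates[n] for n in neighbors[robot]) / len(neighbors[robot])
            updated[robot] = (1 - weight) * value + weight * nbr_mean
        return updated

    # Four robots in a communication ring, each with a noisy local reading.
    estimates = {0: 20.0, 1: 24.0, 2: 18.0, 3: 22.0}
    neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

    for _ in range(50):
        estimates = consensus_step(estimates, neighbors)

    print(estimates)  # every robot's estimate approaches the global mean, 21.0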

She concludes by examining the role of governance where autonomous swarm decision-making is applied. Without a doubt, lethal autonomous weapon systems could pose serious problems, and a 2015 open letter from members of the AI community called for a ban on all autonomous weapons beyond meaningful human control (a measure of how far away in distance and time a human operator may be). Berman argues this meaningful-human-control doctrine should also be applied to other areas such as safety, autonomous cars, privacy, and security. Government and academia, she contends, should support and incentivize interdisciplinary research, and should remove legal barriers that prevent research and replace them with validation systems for the purpose of accountability and safety.

Stuart Russell of U.C. Berkeley's EECS returns for a short recap, maintaining that the time to start solving future problems such as those posed by AI is now.

Russell contends we're too late to the game to regulate, and that we're always too late to the game. In 1933, Ernest Rutherford, a leading nuclear physicist, gave a speech saying that based on his knowledge of the field, there was no way we would ever unlock the energy contained in atoms and no need to be concerned about atomic bombs. Likewise, we knew the basic science behind climate change in the late 19th century; we could have reduced our reliance on fossil fuels then, as the automobile was first being manufactured, but cannot do it now because of vested interests. At the moment, no national defense relies on autonomous weapons, and we did get an agreement at Geneva that it would not happen…

Regrettably, no photos are available from this session.