5th Annual Governance of Emerging Technologies Conference: Part 2

What follows is Part 2 of a series of select summaries of conference presentations, each prepared by 2L Jesse James, at LSI’s Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics, held at the Sandra Day O’Connor College of Law, Beus Center for Law & Society in Phoenix, AZ, on May 17-19, 2017.

Day 1: Introductory Remarks

Gary Marchant, Regents’ Professor and Faculty Director of the Center for Law, Science & Innovation (LSI) at Arizona State University’s (ASU) Sandra Day O’Connor College of Law, and Brad Allenby, LSI Faculty Fellow, Distinguished Sustainability Scientist at the Julie Ann Wrigley Global Institute of Sustainability (ASU), and President’s Professor and Lincoln Professor of Engineering and Ethics in the School of Sustainable Engineering and the Built Environment, Ira A. Fulton Schools of Engineering (ASU), kicked off the conference by discussing the implications of new technologies, focusing the audience’s attention on ways to ensure their positive inclusion and acceptance through many different avenues of governance. They expertly set the stage for ASU’s fifth annual Conference on Governance of Emerging Technologies, at which speakers would delve into the depths of emerging technologies, analyze trends to predict future outcomes, and offer innovative new solutions.

Plenary 1

Moderated by LSI Faculty Fellow James Weinstein, the opening session on Emerging Technologies and Existential/Catastrophic Risks gave speakers an opportunity to define catastrophic and existential risks, identify technologies that may pose them, and offer novel solutions for improving the ratio of potential benefit to potential harm.

Anthony Barrett of the Global Catastrophic Risk Institute (GCRI) presents “Modeling Pathways to Artificial Superintelligence (ASI) Catastrophe for Risk and Decision Analysis.” Defining catastrophic or existential risk as any activity that presents a very real possibility of obliterating or decimating the global human population, Barrett foresees an ASI (a self-aware machine surpassing the cognitive abilities of any human) offering many positive impacts on society, but also posing existential/catastrophic risks. After all, an ASI may by some arcane calculation conclude that all humans must be exterminated. But how does one evaluate such a risk? Barrett outlines a system.

Barrett explains that it is difficult to steer ASI development in any particular direction because its benefits and hazards are presently poorly characterized and quantified. He goes on to describe three fundamental problems that may apply to all catastrophic risks. First, catastrophic and existential risks, as major risks to humanity, are difficult to conceptualize: one cannot adequately judge a risk one does not appreciate or whose consequences one cannot comprehend. Second, the probabilities are especially uncertain because of the long time horizon; the process by which an ASI is built and thereafter raises a fist to humanity may take decades to centuries, during which nearly anything could happen. Third, directly applicable data is very limited: an ASI has not yet been built, and humanity has never seen catastrophes of the scale one could unleash. As a result, ASI development seemingly cannot be influenced or governed effectively.

However, Barrett’s organization, GCRI, seeks to change this. Its leadership recognizes that the stakes are extremely high and believes existential threats merit attention despite the uncertainties. To assess risk adequately, GCRI systematically breaks big questions down into manageable pieces that can be analyzed more easily. The big question at hand is: what is the probability of a catastrophic scenario, and how can beneficial outcomes be realized?

GCRI applies a system for evaluating emerging technology risks. Using available information, the organization constructs either fault tree models (working backward in time from a catastrophic endpoint) or technology development models (working forward from the current year) to identify problems. From these catastrophe models, catastrophe-enabling triggers can be identified, and the probabilities of key dates or developments can be estimated via direct or indirect technology models.

Barrett presents a basic ASI scenario fault tree to the audience (a rough probability sketch follows the list):

  1. Seed AI is constructed with the ability to undergo recursive self-improvement
  2. Seed AI undergoes self-improvement, resulting in building more intelligent AI
  3. Takeoff results in one (or more) ASI
  4. ASI gains decisive strategic advantage over humanity
  5. Ultimate outcome depends on ASI actions (extinction or not)
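
To make the fault-tree arithmetic concrete, the short Python sketch below treats each step in this pathway as a conditional probability and multiplies the chain to get a rough probability for the full catastrophe scenario. The step labels and all of the numbers are placeholders invented for illustration; they are not Barrett’s or GCRI’s estimates, and a real fault tree would use distributions elicited from experts rather than single point values.

    # Minimal illustration of fault-tree-style pathway arithmetic.
    # Every probability below is a hypothetical placeholder, not a GCRI estimate.
    pathway_steps = {
        "seed AI with recursive self-improvement is built": 0.10,
        "self-improvement takeoff yields one or more ASI": 0.50,
        "ASI gains decisive strategic advantage over humanity": 0.40,
        "ASI actions lead to a catastrophic outcome": 0.30,
    }

    def pathway_probability(steps):
        """Multiply conditional step probabilities along a single pathway."""
        p = 1.0
        for prob in steps.values():
            p *= prob
        return p

    print(f"Illustrative pathway probability: {pathway_probability(pathway_steps):.2%}")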

After completing a fault tree, the next steps are to update the literature review and synthesis, gather more data, recruit elicitees with relevant expertise, test the expert elicitation materials and questions, and conduct and document the elicitations, analyses, and forecasts. The resulting information can then be used to engage stakeholders and the public to inform decisions. Some may be skeptical that resources should be spent researching a problem that seems “far-fetched” or “far away,” but Barrett closes strongly, noting that some things can be done now that will yield significant risk reductions in the future, and that the value of strategic foresight lies not in the prediction itself but in informing policymakers and allowing them to anticipate and respond.

Haydn Belfield of the Centre for the Study of Existential Risk, University of Cambridge, spoke on disaster planning for geoengineering research. Belfield was part of a project sponsored by the Finnish government to produce a report outlining what the international community could do to reduce global or catastrophic risk. For the report, 107 identified recommendations were narrowed down to just three, which Belfield argues for passionately: First, governance of geoengineering scenario planning research must be developed. Second, more research must be done in general. Third, there must be more funding for these critical areas.

Belfield articulates the difference between existential risks (total human extinction) and global catastrophic risks (the loss of 10% of the world’s human population), and identifies some of the most pertinent. Asteroids are a major risk, as evidenced by the mass extinction event that killed the dinosaurs; the KT boundary, a dark line in the Earth’s geology above which no dinosaur fossils are found, serves as a grim reminder of the destructive power asteroids can wield. But existential risks need not be natural; they can also be man-made. Nuclear war and nuclear winter, for example, could easily render humanity extinct. Belfield mentions climate change could have a similar effect as well, but declines to speculate. In the last 10-20 years, two emerging technologies have also come to pose extreme risks: AI development and synthetic biology.

In an attempt to discover how to govern emerging technologies so as to maximize benefits and minimize global and existential risk, the Minister of Foreign Affairs of Finland tasked Belfield and the team with producing a report on what the international community can do to reduce risk. By holding different conferences and scoring recommendations on how likely they were to be realized, how effective they would be, and how practical they were, 107 recommendations were narrowed to 17, then to six, and finally to three present and important potential courses of action: First, governance of geoengineering scenario planning research must be developed. Second, more research must be done. Third, more funding must be provided.

These three recommendations dealt with catastrophic risk overall, but they also shed an unexpected light on geoengineering research. Belfield believes current geoengineering research does not examine itself through the lens of existential risk; much of it appears to be conducted without evaluating potential existential consequences. Sulfate seeding to reduce temperature, or solar radiation management projects, may accomplish their primary goal of cooling or shielding the planet, but these projects may also have unexpected tertiary effects or externalities with a serious existential impact. For instance, sulfate seeding may cool the earth, but it could also create a snowball effect, drastically reducing global temperatures. Additionally, if sulfate seeding is abruptly halted, there is the possibility of a termination shock, leading to unpredictable temperature spikes worldwide.

Belfield argues this kind of research should not be conducted without adequate consideration of possible externalities leading to existential events, and analogizes to other types of disaster scenario planning. Pandemics receive a desirable level of attention: pandemic response scenarios are considered regionally, nationally, and internationally. When it comes to geoengineering, Belfield says, “We need more of this kind of tabletop exercise.”

Margaret Kosal of the Sam Nunn School of International Affairs at the Georgia Institute of Technology speaks on emerging technology and existential security risks, specifically technology’s role in politics and war, as well as the role of politics and war in technology. Kosal argues that technology and war are inextricably interconnected, but that the mode of governance in place today can undermine our ability to protect militarily against possible future existential threats.

To Kosal, technology and war are connected: technological innovation frequently comes from the desire to conquer one’s foes or to defend against conquerors. But these new technologies have reached a point at which mankind has created its own existential risks, like nuclear weapons. Kosal appraises emerging technology in search of what could raise new existential and catastrophic risks, specifically what might displace or challenge nuclear weapons in terms of their international strategic position. She hints there may be other technologies that are widely proliferated (such as the AK-47) but that just don’t have the same international political standing.

For Kosal, these new, emerging geopolitical drivers must be identified to avoid surprise, as new technologies may not be defeatable by conventional arms. Indeed, she quotes General Robert Crone as saying, “What keeps me awake at night is are we going to miss the next big technological advance? And perhaps an enemy will have [it].” She cautions that we may run into something new that’s “not your grandfather’s biological weapons.”

Kosal also cautions that it isn’t just state actors, but nonstate actors that are researching technologies that have the potential for weaponization. These aren’t necessarily terrorist organizations, either, but international or corporate organizations that are largely involved in researching technology that serves a dual purpose. For instance, there now exists a much more effective botulinum toxin. It wasn’t created for a warfighting purpose, but instead to improve botox for plastic surgery.

“The difference between beneficial and dangerous research is often only one of intent,” Kosal says, describing a technology currently being researched by a Chinese team that could compete with nuclear weapons in terms of strategic stability. In an attempt to change the germ line in embryos, the team used a CRISPR-Cas9 system to edit the HBB gene, which encodes the human beta-globin protein. Of 86 embryos injected, 71 survived and 28 were successfully spliced; however, there were many “off-target” mutations. Kosal urges the audience to imagine a threat assessment matrix for a gene edit that reduces the birthrate of a population, and argues that such a capability challenges the position of nuclear weapons.
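
To put those counts in perspective, the rates work out roughly as follows (a back-of-the-envelope calculation, not part of Kosal’s presentation):

    # Rough rates implied by the counts cited above (illustrative arithmetic only).
    injected, survived, edited = 86, 71, 28
    print(f"Survival rate:               {survived / injected:.1%}")  # ~82.6%
    print(f"Editing rate (of injected):  {edited / injected:.1%}")    # ~32.6%
    print(f"Editing rate (of survivors): {edited / survived:.1%}")    # ~39.4%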

The mode of governance today presents a huge challenge amid uncertain capabilities and international regime changes, Kosal adds. Poorly designed, reactionary limits are likely to be ineffective overall and to undermine the security interests of the US and its allies. To paraphrase General Crone: if we don’t discover the next big technological weapon because of misguided research limitations, who will? Kosal’s solution is to foster proactive international scientific cooperation as a means of encouraging beneficial uses of technology.

Catherine Rhodes of the University of Cambridge presents on why existential and catastrophic risks are a special case for governance and on the problems they pose, the foremost challenge being that limits on technology tend to be imposed only reactively. She suggests the main solution is to promote responsible research and scientific practice.

Current governance models struggle to deal with long-term global issues and to set aside short-term interests: there are strong economic drivers for technological development and only limited resources available to international governance systems. Rhodes invites the audience to imagine the consequences of biological technologies, such as an infectious disease caused directly by humans, whether unintentionally or deliberately, through DNA synthesis and gene editing. Currently, such risks can only be effectively addressed at the international level, through treaties or collaborative governance.

Rhodes believes the governance solution is not in restricting the research or manufacture of these technologies, but instead in educating scientists in responsible research and promoting responsible practice.