5th Annual Governance of Emerging Technologies Conference: Part 4

What follows is Part 4 of a series of select summaries, prepared by 2L Jesse James, of presentations given at LSI‘s Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics, held at the Sandra Day O’Connor College of Law, Beus Center for Law & Society in Phoenix, AZ on May 17-19, 2017.

Plenary Session 2 – Responsible AI Development

Wendell Wallach of The Hastings Center and Yale University’s Interdisciplinary Center for Bioethics brings to light key issues related to responsible artificial intelligence (AI) development. With respect to AI, he notes that the field is “generally one of anxiety, shaped by a concentrated set of personalities and private actors,” primarily focused on an ethics fueled by a sense of doom. AI ethics touches many contemporary topics, including cybersecurity, healthcare, employment, and social media. Wallach singles out “Seven Big Issues”: Cybersecurity & Cyberwarfare, Adversarial AI (i.e., bad actors), Algorithmic Bias, Transparency, Technological Unemployment, Lethal Autonomous Weapons, and Superintelligence.

When it comes to AI oversight, Wallach favors “soft” governance (as opposed to legislation/regulation or other “hard” governance tools) using a nimble, flexible and adaptive approach.  He draws from Building Agile Governance for AI and Robotics where he, again, emphasizes, “[t]here is a need for new more agile and adaptive approaches to the governance of emerging technologies. These approaches must rely upon responsible research and innovation, industry standards and oversight, technological solutions to issues where feasible, foresight and planning, multi-stakeholder engagement, and public input to establish values and desired outcomes.”

Subbarao Kambhampati of ASU’s School of Computing, Informatics & Decision Systems Engineering speaks on critical issues with AI development, problems with current dialogues about AI systems, and how laws must adapt to our rapidly changing environment.

Kambhampati begins by outlining the necessities for the responsible creation of AI systems. AI must be safe in safety-critical applications, and must furthermore be fair, transparent, and accountable. Only then, he says, will there be healthy collaborations between people and AI systems.

But how does one accomplish these goals? That is something yet to be determined, and Kambhampati believes there is a lot of merit to providing venues and incentives for researchers to seriously plan avenues to achieve these objectives. He lauds AI grand challenges that encourage researchers to address some of the most pressing long-term social issues and believes that awards should be given to academics that make the biggest contributions to AI, people, and society. Furthermore, Kambhampati says much is to be gained through programs providing resources to researchers at non-profit organizations.

Kambhampati offers some of his personal perspectives on responsible research, as well. He says we live in very interesting times where it is, “All AI all the time,” and almost everyone is an expert. Unfortunately, because there’s a great degree of basic misunderstandings about the technology, AI itself gets credit and blame indiscriminately for things that are not its fault. There is a conflation of ‘autonomy’ and ‘intelligence’ that muddies the waters for understanding and discussion, and a frequent misassignment of fault regarding whether problems occur from bias in the AI’s algorithmic computing functions or rather from invalid or incomplete data sets that are the AI’s only options.

He concludes that law seems to be playing catch-up and needs to take into account new phenomena on the digital horizon. For mass numbers of people agreeing to software’s terms of service, how does one verify contracts that are generated on a per-interaction basis? Kambhampati closes, noting there are solutions such as meta verifying the space of contracts, but ultimately new perspectives are needed in law to provide imaginative solutions to our present problems.

John Havens of the Institute of Electrical and Electronics Engineers (IEEE) Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems spoke on the importance of ethics in advancing technology for humanity. IEEE has multiple ethics-related operating units, covering its code of ethics, professional ethics, and technological ethics. The Global Initiative came together in 2015 and is part of the IEEE Standards Association. The group published Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems in December 2016, which has been incredibly influential in the field and has been translated into many languages. They are presently working on Ethically Aligned Design 2.

Havens speaks about IEEE’s mandate and mission. Its mandate is to create “ethically aligned” autonomous and intelligent systems: to incorporate ethical aspects of human well-being that may not automatically be considered in the current design and manufacture of Artificial Super Intelligence (ASI) technologies. IEEE’s mission is to ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems.

Additionally, in the absence of a formal global governance structure, IEEE is pursuing the creation of global standards. Havens outlines the IEEE P7000 series, beginning with P7000 itself, a model process for addressing ethical concerns during system design:

IEEE P7001 – Transparency of Autonomous Systems

IEEE P7002 – Data Privacy Process

IEEE P7003 – Algorithmic Bias Considerations

IEEE P7004 – Standard on Child and Student Data Governance

IEEE P7005 – Standard on Employer Data Governance

IEEE P7006 – Standard on Personal Data AI Agent Working Group

Havens concludes the discussion by cautioning that while standards systems can work within areas of governance gaps, there is undoubtedly an economic market aspect to their adoption. The standards must be useful, and if they are not workable for the corporate or NGO practitioners, the standards will never go into practice.

Spring Berman of ASU’s Ira A. Fulton Schools of Engineering discusses swarm robotics and recent developments within the field. She outlines the applications of swarm robotics (from monitoring and patrolling to biomedical applications at the micro or nano scale), identifies technical challenges for swarm robotics, such as denied environments, and highlights recent advances in robotic swarms, such as unmanned aerial vehicles and autonomous cars.

Berman lists some of the fields that will benefit greatly from swarm robotics: environmental sensing, cartography, exploration, monitoring / patrolling, disaster response, search and rescue, containing oil spills, mine clearance, agriculture, mining, manufacturing, construction, warehousing, transportation, delivery, surveillance and reconnaissance, explosive ordnance disposal, targeted cancer therapy and cell structure imaging (at the micro or nanoscale).

But there are technical challenges for swarm robotics, as Berman acknowledges. It is difficult to scale up: planning and monitoring the actions of hundreds to thousands of robots is infeasible. Theoretical guarantees on performance are required, as well as adaptability to changing conditions and objectives in real time. There are also problems with human-swarm interaction, specifically the ability to direct swarms with precision, determining which user interface is most intuitive, cognitive load, and user training. Swarms are additionally not robust: long-duration autonomy, operation in the presence of errors and individual robot failures, verification of autonomous and learning systems, and non-deterministic behaviors all leave much to be desired. Lastly, swarm robots suffer in denied environments, where they are unable to access GPS, have limited communication bandwidth to a central controller, or exceed their range.

Berman excitedly lists enabling technologies that may allow swarm robotics to surmount these obstacles, such as new sensors, smart materials, advances in communication and free-space optics, deep learning, cloud computing, virtual and augmented reality, the internet of things, possible new user interfaces with direct brain control, synthetic biology, and advances in analyzing natural swarm activity. She also finds cause for hope in recent advances in robotic swarms, such as improved decision-making under uncertainty and better decentralized information gathering and data fusion.

She concludes by examining the role of governance where autonomous swarm decision-making is applied. Without a doubt, Lethal Autonomous Weapon Systems could pose serious problems, and a 2015 open letter from members of the AI community called for a ban on all autonomous weapons beyond meaningful human control (a measure of how far away in distance and time a human may be). Berman also argues this meaningful-human-control doctrine should be applied to other areas, such as safety, autonomous cars, privacy, and security. Thus, government and academia should support and incentivize interdisciplinary research, remove legal barriers that prevent research, and replace them with validating systems for the purpose of accountability and safety.

Stuart Russell of U.C. Berkeley’s EECS returns for a short recap, maintaining his stance that the time to start solving future problems such as those posed by AI is now.

Russell contends we’re too late to the game to regulate, and that we’re always too late to the game. In 1933, Ernest Rutherford, a leading nuclear physicist, gave a speech saying that, based on his knowledge of the field, there was no way we would ever unlock the energy contained in atoms, and that there was no need to be concerned with atomic bombs. Likewise, we knew the basic science behind climate change in the late 19th century. We could have changed our reliance on fossil fuels then, as the automobile was first being manufactured, but cannot do it now because of vested interests. At the moment, no national defense relies on autonomous weapons, and we did get an agreement at Geneva that would not happen…

Regrettably, no photos are available from this session.




The Scientist: Marchant Quoted in Response to First Study Evaluating Genetic Sequencing for Healthy Individuals

For the first time, a randomized trial was conducted to evaluate how the use of whole-genome sequencing (WGS) in asymptomatic adults affects clinical care and patient outcomes.  The study was published Monday in the Annals of Internal Medicine and funded by the National Institutes of Health. Researchers found that WGS did reveal new molecular findings, and that many providers could sufficiently contend with genomic information, but there remains uncertainty as to the results’ clinical utility when weighed against costs such as patient anxiety and increased healthcare expenditures.

In an article covering the study’s release and results, The Scientist spoke to LSI’s Faculty Director Gary Marchant, who holds a Ph.D. in genetics. Though Professor Marchant acknowledged potential negatives such as insurance discrimination, his outlook is generally positive for the future of genetic testing. He notes not only that 1-2% of large populations will find a significant and potentially actionable medical result from WGS, which may be a justification in itself for including the test, but also that all of the study’s participants carried recessive gene variants that can be important when planning to have children.

Read the full article here.


5th Annual Governance of Emerging Technologies Conference: Part 3

What follows is Part 3 of a series of select summaries, prepared by 2L Jesse James, of presentations given at LSI‘s Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics, held at the Sandra Day O’Connor College of Law, Beus Center for Law & Society in Phoenix, AZ on May 17-19, 2017.

Day 1: Keynote 1 

     Stuart Russell

World-renowned computer scientist and artificial intelligence guru Stuart Russell invigorated the audience with a talk about his article “Provably Beneficial AI” in the book The Next Step: Exponential Life. He notes AI is a hot topic and, because of the existential risks involved, deserves nothing but cautious movement forward. He offers conceptual directives to make AI safe, based entirely on uncertainty in a machine’s objectives. Russell contends that, as long as humans have special status in an AI’s programming, and all its thinking revolves around how to please humans, the AI should be safe.

The audience shifts in their seats a bit when Russell begins by noting that the world Go champion is an AI. The victorious software, AlphaGo, is Google DeepMind’s deep-learning bot, which has beaten the greatest Go players in the world. Deep learning revives some concern about superintelligence, and Russell has put thought into what programs must be in place to make sure AI is safe.

His premise is that, eventually, AI systems will make better (he admits a certain ambiguity to “better”) decisions than humans because they will be able to take into account much more information and look much farther into the future. The upside is that everything we have today is a product of intelligence, and possessing access to a significantly greater intelligence would be a step change in civilization and a possible new golden era. The downside, of course, is killer robots, and risks to employment — among other potential catastrophes.

Russell sees a potential “gorilla problem” as well: gorillas’ ancestors gave rise to something smarter than themselves (humans) but have nothing to show for it. But we should also feel honored, he says, quoting Alan Turing: “Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled.”

Another problem involved with the creation of advanced AI systems is outlined in Russell’s quotation of Norbert Wiener, “we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” Right now, our AI systems will do exactly what you ask them to, not think and say, “are you sure that’s what you want?” It’s incredibly difficult to forecast the future, and we don’t want to program an AI and end up with a King Midas situation of getting everything we’ve asked for.

Russell contends machines already have an inclination toward self-preservation, which can potentially lead to problems. If a machine has a purpose, it necessarily has an appetite for self-preservation: if one asks a machine to fetch coffee, it can’t do so if it’s turned off, so it will endeavor not to be turned off. In a machine that might harm anyone who tries to turn it off in its attempt to complete its task, Russell sees two problems: a misaligned objective, and the machine protecting itself from anyone who tries to interfere.

To solve these problems, Russell and others set up the Center for Human-Compatible AI to “reorient the general thrust of AI research toward provably beneficial systems,” with three simple ideas on how to approach the problem. First, the AI’s only objective is to maximize the realization of human values (implicit preferences over complete lives). Second, the AI is initially uncertain about what those values are. Third, human behavior provides information about human values. To achieve this, the AI must be uncertain about its objectives, so observable human actions can give it further information about them. Thus, humans have to actually come into the equation in the AI’s thinking, whereby humans (or “principals”) are given special status.

Russell differentiates between an Artificial General Intelligence (“AGI”) and a regular problem-solver AI. For a regular problem-solver AI, one assumes and defines a formal problem “F” that the machine can solve arbitrarily well. That machine is an “F”-solver, not an AGI. However, its program design may include subsystems of arbitrary intelligence; they just have to be connected, trained, and motivated in the right way. Recall the coffee-fetching robot: without subsystems of intelligence and uncertain objectives based on observable human behavior, when asked to fetch coffee and someone attempts to turn it off, the AI might disable the off switch and taser all the employees at Starbucks® to get the coffee. But with uncertain objectives based on observable human behavior, the AI will think, “the human might switch me off, but only if I’m doing something wrong. I don’t know what wrong is, but I know I don’t want to do it. Therefore, I should let the human switch me off.”
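The incentive Russell describes can be made concrete with a toy calculation in the spirit of his group’s “off-switch” analysis. The belief distribution and payoffs below are illustrative assumptions, not figures from the talk: a robot uncertain about the hidden utility of its action does at least as well, in expectation, by deferring to a human who switches it off whenever the action would be harmful.

```python
import random

# The robot is uncertain about the hidden utility u of its proposed
# action; we represent that belief as samples from an assumed
# (illustrative) standard normal distribution.
random.seed(0)
belief = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def expected(values):
    return sum(values) / len(values)

# Acting immediately ignores the human: the payoff is u itself.
act_now = expected(belief)

# Deferring lets the human switch the robot off whenever u < 0
# (payoff 0) and lets it proceed otherwise (payoff u).
defer = expected([max(u, 0.0) for u in belief])

print(f"act now: {act_now:+.3f}, defer to human: {defer:+.3f}")
assert defer >= act_now  # deferring is never worse under uncertainty
```

The comparison only favors deference because the robot is uncertain: if its belief collapsed to a single known utility, acting and deferring would tie, which is exactly Russell’s point about building uncertainty into the objective.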

This does involve value-alignment issues, Russell admits. Humans are nasty, irrational, inconsistent, weak-willed, computationally limited, incredibly complex, heterogeneous, and may not have an objective in any meaningful sense. But the AI will not act like those it observes; it is purely altruistic and cares about everyone. It is learning to predict what people want, not learning to want it. And if someone wants others to suffer, the AI may need to weight that preference negatively, a kind of ingrained altruism. The AI also needs a scheme for weighting preferences. Russell invites the audience to imagine a scenario in which a man is about to meet with an important client. His AI assistant reminds him he promised his wife an anniversary dinner at the same time as the meeting. If too much weight is given to the individual man, the AI may come to the rescue and say, “I’ve managed to delay your client’s flight so you can have dinner with your wife.” Alternatively, if too little preference is given to the individual, the AI may pack up its bags and announce, “this one’s yours to deal with, see ya!”

Russell summarizes that this value misalignment is a potential risk, but believes if we go about it a particular way, certain design templates may support provably beneficial systems. Although he does not believe we are ready yet for standards or regulations, economic incentives may work in our favor.

Russell closes the presentation by posing some open questions for further reflection. Can we change the way AI defines itself? For instance, a civil engineer says, “I design bridges,” not “I design bridges that don’t fall down”; the bit about not falling down is already implied. Will solutions to these near-term control problems scale up to the long-term control problem of global artificial superintelligences like Skynet from the Terminator movies? What about James Bond villains who reprogram an AI or build one without these protections? What about long-term enfeeblement, or the slow-boiling-frog problem as seen in E.M. Forster’s The Machine Stops?


5th Annual Governance of Emerging Technologies Conference: Part 2

What follows is Part 2 of a series of select summaries, prepared by 2L Jesse James, of presentations given at LSI‘s Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics, held at the Sandra Day O’Connor College of Law, Beus Center for Law & Society in Phoenix, AZ on May 17-19, 2017.

Day 1: Introductory Remarks

Gary Marchant, Regents’ Professor and Faculty Director of the Center for Law, Science & Innovation (LSI) at Arizona State University’s (ASU) Sandra Day O’Connor College of Law, and Brad Allenby, LSI Faculty Fellow, Distinguished Sustainability Scientist at the Julie Ann Wrigley Global Institute of Sustainability (ASU), and President’s Professor and Lincoln Professor of Engineering and Ethics, School of Sustainable Engineering and the Built Environment, Ira A. Fulton Schools of Engineering (ASU), kicked off the conference by discussing the implications of new technologies, focusing the audience’s attention on ways to ensure their positive inclusion and acceptance through many differing avenues of governance. They expertly set the stage at ASU’s fifth annual Conference on Governance of Emerging Technologies for speakers to delve into the depths of emerging technologies, analyze trends to predict future outcomes, and offer innovative new solutions.

Plenary 1

Moderated by LSI Faculty Fellow James Weinstein, the opening session on Emerging Technologies and Existential/Catastrophic Risks gave speakers an opportunity to define catastrophic and existential risks, identify technologies that may engage those risks, and offer novel solutions to improve the ratio of potential benefit to potential harm.

Anthony Barrett of the Global Catastrophic Risk Institute (GCRI) presents “Modeling Pathways to Artificial Superintelligence (ASI) Catastrophe for Risk and Decision Analysis.” Defining catastrophic or existential risk as any activity that presents a very real possibility of obliterating or decimating the global human population, Barrett foresees an ASI (a self-aware machine surpassing the cognitive abilities of any human) offering many positive impacts on society, but also posing existential/catastrophic risks. After all, an ASI may by some arcane calculation conclude that all humans must be exterminated. But how does one evaluate such a risk? Barrett outlines a system.

Barrett explains it is difficult to direct ASI development in any particular way because the benefits and hazards are presently poorly characterized and quantified. He goes on to describe three fundamental problems that may apply to all catastrophic risks. First, catastrophic and existential risks (major risks to humanity) are difficult to conceptualize; one cannot adequately judge a risk without appreciating it or comprehending its consequences. Second, the probabilities are especially uncertain because of the long time factor: building an ASI that thereafter raises a fist to humanity is a process that may take decades to centuries, during which nearly anything could happen. Third, directly applicable data is very limited. An ASI has not yet been built, and humanity has not seen catastrophes of the scale one could unleash. Thus, ASI development seemingly cannot be influenced or governed effectively.

However, Barrett’s organization, GCRI, seeks to change this. The entity’s leadership recognizes that the stakes are extremely high and believes existential threats merit attention despite uncertainties. In order to adequately assess risk, GCRI systematically breaks down big questions into manageable bits to be more easily manipulated. The big question at hand is what is the probability of a catastrophic scenario, and how can beneficial outcomes be realized?

GCRI applies a system for evaluating emerging-technology risks. The organization constructs either fault-tree models (working backward in time from a catastrophic endpoint using available information) or technology-development models (working forward from the current year) to identify problems. From these catastrophe models, catastrophe-enabling triggers can be identified, and the probabilities of key dates or developments can be estimated via direct or indirect technology models.

Barrett presents a basic ASI scenario fault-tree to the audience:

  1. Seed AI is constructed with the ability to undergo recursive self-improvement
  2. Seed AI undergoes self-improvement, resulting in building more intelligent AI
  3. Takeoff results in one (or more) ASI
  4. ASI gains a decisive strategic advantage over humanity
  5. Ultimate outcome depends on ASI actions (extinction or not)

After completing a fault tree, the next steps are to update the literature review and synthesis, gather more data, recruit elicitees with relevant expertise, test expert-elicitation materials and questions, and conduct and document elicitations, analyses, and forecasts. The information can then be used to engage stakeholders and the public to inform decisions. Some may be skeptical that resources should be spent on researching a problem that could be “far-fetched” or “far away,” but Barrett closes strongly, noting that some things can be done now that will yield significant risk reductions in the future, and that the value of strategic foresight lies not in the predicting itself, but in informing policymakers and allowing them to anticipate and respond.
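For intuition, the fault-tree logic above can be sketched numerically. Under the (strong) assumption that each step’s conditional probability can be elicited from experts and treated as independent, the pathway probability is simply the product along the chain. The numbers below are illustrative placeholders, not GCRI estimates:

```python
# Illustrative conditional probabilities for each step of the ASI
# fault tree; these placeholders stand in for expert-elicited values.
steps = {
    "seed AI with recursive self-improvement is built": 0.10,
    "seed AI self-improves into more intelligent AI":   0.50,
    "takeoff results in one (or more) ASI":             0.50,
    "ASI gains decisive strategic advantage":           0.30,
    "ASI actions lead to an extinction-level outcome":  0.20,
}

# Every step must occur for the scenario to unfold, so the pathway
# probability is the product of the conditional step probabilities.
pathway_probability = 1.0
for step, p in steps.items():
    pathway_probability *= p

print(f"P(catastrophe via this pathway) = {pathway_probability:.4f}")
```

Even a toy model like this shows why elicitation matters: the product is dominated by whichever step experts judge least likely, so refining that single estimate moves the overall risk figure the most.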

Haydn Belfield of the Centre for the Study of Existential Risk, University of Cambridge, spoke on disaster planning for geoengineering research. Belfield was part of a project sponsored by the Finnish government to produce a report outlining what the international community could do to reduce global catastrophic risk. For the report, 107 identified recommendations were narrowed down to just three, which Belfield argues for passionately: First, governance of geoengineering scenario-planning research must be developed. Second, more research must be done in general. Third, more funding must flow to these critical areas.

Belfield articulates the difference between existential risks (total human extinction) and global catastrophic risks (the loss of 10% of the world’s human population), and identifies some of the most pertinent. Asteroids are a major risk, as evidenced by the mass extinction event that killed the dinosaurs; the KT boundary, a dark line in the Earth’s geology above which no dinosaur fossils are found, serves as a grim reminder of the destructive power asteroids can wield. But existential risks need not be natural; they can also be man-made. Nuclear war and nuclear winter, for example, could easily render humanity extinct. Belfield mentions climate change could have a similar effect as well, but declines to speculate. In the last 10-20 years, two emerging technologies have also come to pose extreme risks: AI development and synthetic biology.

In an attempt to discover how to govern emerging technologies so as to maximize benefits and minimize global existential risk, the Minister of Foreign Affairs of Finland tasked Belfield and the team with producing a report on what the international community can do to reduce risk. Holding different conferences, and scoring recommendations by how likely they were to be realized, how effective, and how practical they were, the team narrowed 107 recommendations to 17, then to six, and finally to three important potential courses of action: First, governance of geoengineering scenario-planning research must be developed. Second, more research must be done. Third, more funding must be secured.

These three recommendations dealt with catastrophic risk overall, but also shed an unexpected light on geoengineering research. Belfield believes current geoengineering research does not look upon itself through a lens of existential risk; much of it appears to be conducted without evaluating potential existential consequences. Sulfate seeding to reduce temperature or solar radiation management projects may accomplish their primary goal of cooling or shielding the planet, but these projects may also have unexpected tertiary effects or externalities with a serious existential impact. For instance, sulfate seeding may cool the earth, but could also create a snowball effect, drastically reducing global temperatures. Additionally, if sulfate seeding is abruptly halted, there could be a termination shock, leading to unpredictable temperature spikes worldwide.

Belfield argues this kind of research should not be conducted without adequate consideration of possible externalities leading to existential events, and analogizes to other types of scenario disaster planning. Pandemics receive a desirable level of attention: there are scenarios considered for pandemic response regionally, nationally, and internationally. When it comes to geoengineering, Belfield says, “We need more of this kind of tabletop exercise.”

Margaret Kosal of the Sam Nunn School of International Affairs at the Georgia Institute of Technology speaks on emerging technology and existential security risks, specifically technology’s role in politics and war, as well as the role of politics and war in technology. Kosal argues that technology and war are inextricably interconnected, but that the mode of governance in place today can undermine our ability to protect militarily against possible future existential threats.

To Kosal, technology and war are connected. Technological innovation frequently comes from the desire to conquer one’s foes or defend against conquerors. But these new technologies have reached a point at which mankind has created its own existential risks, like nuclear weapons. Kosal appraises emerging technology in a search for what could raise new existential & catastrophic risks, specifically, what might displace or challenge nuclear weapons in terms of their international strategic position. She hints there may be other technologies that are widely proliferated (such as the AK-47) that just don’t have the same international political standing.

For Kosal, these new, emerging geopolitical drivers must be identified to avoid surprise, as new technologies may not be defeatable by conventional arms. Indeed, she quotes General Robert Crone: “What keeps me awake at night is, are we going to miss the next big technological advance? And perhaps an enemy will have [it].” She cautions that we may run into something new that’s “not your grandfather’s biological weapons.”

Kosal also cautions that it isn’t just state actors, but nonstate actors that are researching technologies that have the potential for weaponization. These aren’t necessarily terrorist organizations, either, but international or corporate organizations that are largely involved in researching technology that serves a dual purpose. For instance, there now exists a much more effective botulinum toxin. It wasn’t created for a warfighting purpose, but instead to improve botox for plastic surgery.

“The difference between beneficial and dangerous research is often only one of intent,” Kosal says, describing a technology, currently being researched by a Chinese team, that could compete with nuclear weapons on strategic stability. In trying to change the germ line in embryos, the team used a CRISPR-Cas9 system to edit the HBB gene, which encodes the human beta-globin protein. Of 86 injected embryos, 71 survived and 28 were successfully spliced; however, there were many “off-target” mutations. Kosal urges the audience to imagine a threat-assessment matrix for a gene edit that reduces the birthrate of a population, and argues it challenges the position of nuclear weapons.

The mode of governance today presents a huge challenge amid uncertain capabilities and international regime changes, Kosal adds. Poorly designed, reactionary limits are likely to be ineffective overall and to undermine security interests for the US and its allies. To paraphrase General Crone: if we don’t discover the next big technological weapon because of misguided research limitations, who will? Kosal’s solution is to foster proactive international scientific cooperation as a means to encourage beneficial use of technology.

Catherine Rhodes of the University of Cambridge presents on why existential and catastrophic risks are a special case for governance and on the problems encountered, the foremost challenge being that technology tends to be limited only reactively. She suggests the main solution is to promote responsible research and scientific practice.

Current governance models struggle to deal with long-term global issues and to set aside short-term interests. There are strong economic drivers for technological development and limited resources available to international governance systems. Rhodes invites the audience to imagine the consequences of biological technologies, such as infectious diseases caused by humans, whether unintentionally or deliberately, through DNA synthesis and gene editing. Currently, these can only be effectively addressed at an international level, by arranging treaties or through collaborative governance.

Rhodes believes the governance solution is not in restricting the research or manufacture of these technologies, but instead in educating scientists in responsible research and promoting responsible practice.


5th Annual Governance of Emerging Technologies Conference: Part 1

LSI‘s Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics was held at the Sandra Day O’Connor College of Law, Beus Center for Law & Society in Phoenix, AZ on May 17-19, 2017.

What follows is a periodic series of summaries from the various conference keynote and plenary sessions, prepared by rising 2L student Jesse James.

The first feature consists of “The Conference: the big picture” from a law student’s perspective.

2017’s Conference & The Elephant in the Room

By Jesse James

2L – Sandra Day O’Connor College of Law, Arizona State University

The LSI Governance of Emerging Technologies Conference (the “Conference”) is a venue for forward-thinking interdisciplinary academics, industry leaders, and other stakeholders to come together and explore governance challenges for emerging technologies, their societal implications, and the pitfalls and dangers involved. The 2017 Conference covered an enormous amount of dense material across a vast scope of subjects. Scores of speakers provided insight on topics ranging from the responsible and safe development of artificial intelligence to the consequences of uberveillance on the human psyche. Overall, though, there was an acute focus on governance and the role law has to play in the future, an important theme being law’s struggle to keep up with the ever-increasing technological complexity of society.

The Conference began and ended by addressing the elephant in the room: mankind’s potential to destroy or enslave itself, made increasingly possible by ever more powerful technologies. The catastrophic risk posed to humanity by AI, gene editing as the new nuclear weapon, and the human rights consequences of an always-on, always-connected society were presented as potential dystopian futures, not so far off, that we must do our best to avoid. A theme touched on by the majority of speakers was that while technological innovations may provide great benefits, the risks can make for a heavy calculus. What is done today matters, and a safe future must be secured.

Law has historically provided the safety net for these elephant-problems by plodding along slowly and self-correcting until a workable solution emerges. But, as many speakers noted, law continually plays catch-up, and we may need to rethink our modes of governance to arrive at more efficient solutions. The proper and appropriate modes of regulation may be quite different from those we currently employ. Speakers examined differences between the European and American patent systems for historical technologies, historical changes in governance made in response to new technological developments, and the potential for private certification schemes to supplement existing legal power. Innovative proposals such as these get people thinking about solutions.

By educating one another on precise, specialized topics and issues, speakers at the 2017 Conference recreated a valuable and necessary modern-day performance of the ancient fable of the blind men and the elephant. In the fable, several blind men encounter an elephant, and each man touches and identifies a different part (a tusk, a tail, an ear, etc.). To conceptualize the whole of the elephant, however, the men must share their individual interpretations and synthesize them together.  The Conference, likewise, is all about getting the bigger picture from its various parts. Presentations on modern-day letters of marque for private hacking, AI development, and big data are presentations on the tusks, tails, and ears of the increasingly complex and fast-moving society of which we are all part. Research cannot remain siloed: the impacts one technology may have upon another, and the complete reshaping of our societal landscape, are invisible at the micro level. The big picture can only be seen by synthesis.

Problems cannot be solved unless they have first been identified. The Conference provides a stage for people to expose these elephant-scale problems and to get people thinking about them. Whether law is suited to engage and govern new technologies is an important question, but one that can be taken up only once the potentialities and implications are known. Then, after the blind men have identified the elephant, the most important and fundamental question is “How does one eat an elephant?” The answer, of course: “One bite at a time.”



Second Annual Healthspan Extension Policy and Regulation Workshop

By Gary E. Marchant

The Center for Law, Science & Innovation convened its Second Annual Workshop on Healthspan Extension Policy and Regulation in Scottsdale, Arizona on May 5, 2017. This invitation-only event was attended by approximately 30 experts in science, medicine, public policy, regulation, law and ethics. The purpose of this annual workshop series is to address the regulatory, legal, policy and ethical issues in bringing to market interventions that can extend human lifespan and healthspan. David Ewing Duncan, author of books such as The Experimental Man and When I’m 164, kicked off the workshop with a discussion of big data in healthcare, and how artificial intelligence will be needed, and will likely soon be available, to utilize this data to inform us of environmental, dietary, lifestyle, and medical interventions that can extend the quality and quantity of human life.

Jan Vijg, a research scientist at the Albert Einstein College of Medicine, next presented the results of his recently published Nature study suggesting an upper limit of 115-120 years to the human lifespan, along with data on the accumulation of genetic and epigenetic changes with aging, the cumulative effect of which likely limits the human lifespan. Gordon J. Lithgow, Research Director at The Buck Institute for Research on Aging, presented next on “The Pharmacology of Lifespan Extension.” He demonstrated that lifespan and healthspan can be increased in lab animals with drug-like molecules and that hundreds of drug targets have been identified, but noted that more work needs to be done to ensure reproducibility of results and to understand what factors may affect study outcomes.

Participants were then treated to a fascinating lunchtime talk by Joon Yun of Palo Alto Investors, LLC, entitled “Longevity is Healthcare 3.0.” In a broad-ranging talk, Dr. Yun argued that extending longevity and healthspan will become a central focus of our health care system going forward, and that the key to achieving this goal will be building and maintaining something he termed “homeostatic capacity.”

The afternoon featured four presentations. First, Lewis Gruber, CEO of SIWA Corporation, provided an update on removing senescent cells as an anti-aging strategy, including results of an anti-senescent-cell monoclonal antibody his company is developing. Next, Tammy Phinney of Biogen presented on “Healthspan Extending Interventions: Perspectives From a Pharma Company,” identifying key challenges in the regulatory approval and commercialization of a drug product that extends human lifespan. Susan Brienza, a partner in the law firm of Ryley Carlock & Applewhite, reviewed the significant number of potential anti-aging nutraceuticals that are available, and how such products are regulated by the FDA and FTC. Finally, Gary Marchant, Faculty Director of the Center for Law, Science & Innovation, discussed seven different possible regulatory and commercialization pathways for healthspan and lifespan extension interventions.

Each of the presentations was followed by a robust roundtable discussion involving all the workshop participants. While some progress was made in exploring the regulatory and commercialization pathways for healthspan extension interventions, which will be summarized in a published journal article that a number of the participants committed to co-author, many issues are still outstanding and will be taken up by next year’s workshop. At the end of the meeting, the workshop participants reconvened for some life-enhancing food and spirits at the nearby Pig & Pickle restaurant – yummy!


NIH’s “All of Us” Beta Launch!


After months of discussion and reports on the NIH’s nationwide genomic “All of Us”* research program, the ground-breaking beta testing phase has begun.  To make sure the project is efficient & effective, the rollout will proceed gradually.  The program’s purpose is to promote health research and medical discoveries via genomic testing, enabling personalized early disease detection, treatment, and appropriate follow-up care — all tailored to individual requirements.

Specifics about the beta launch can be found here.  The main question for many is the following, along with the answer provided by the NIH:

“How can I sign up to be a beta tester for the All of Us Research Program?

Thank you for your interest! We’ve been so encouraged by all the enthusiasm we’ve seen from people who want to sign up.

At this time, enrollment in this research program is by invitation only as we work to bring more than 100 sites on board. We’re ramping up our beta testing phase in stages; our partner sites are launching at different times, with exact dates subject to change, and we have limits on how many volunteers we can enroll to start.

If you receive care at one of our participating health care provider organizations, you may be contacted about the opportunity to enroll as a beta tester. We also will be doing beta testing with “direct volunteers” (participants who don’t receive care from participating health centers but want to enroll in this research program) in certain communities around the country.

The vast majority of our participants will join after our national launch, once we’ve worked out more of the kinks. Please subscribe for our email updates at https://www.joinallofus.org/, and we’ll let you know when open enrollment begins and how you can sign up then.”

*Previously known as the “Precision Medicine Initiative”


Congratulations – Student Poster Winners

One of the highlights of the annual Governance of Emerging Technologies Conference is the student poster session, when students from any academic institution are invited to present a poster related to a theme of the conference.  This year, the top two posters, selected by the conference committee, were awarded a cash prize of $500 for first place and $250 for second place.

We are very pleased to announce that a Sandra Day O’Connor College of Law student took second place!  Below is a photo of 3L Lucille “Lucy” Tournas and her poster titled Brain Computer Interface (BCI) for Human Enhancement.

One of the “hot” areas creating a lot of buzz in the law, technology and policy space is synthetic enhancement.  Lucy’s poster highlights the pros & cons of artificially augmented human intelligence, including the underlying technology; the legal, ethical & technical obstacles; the risks; and regulatory oversight.

Paige Williams, an ASU Biomedical Science major, took home first place.

~ Congratulations Paige and Lucy ~


Stevens & Hartley Present: Jurors, Genomics and Tort Litigation Webinar

On May 15, 2017 LSI Faculty Fellow Yvonne Stevens and LSI Executive Council Member Kirk Hartley, among others, presented to a national audience on the implications of genetics — and issues particular to judges and juries — in tort litigation.  Audio CDs from the event, along with presentation materials, are available for purchase here.


You don’t WannaCry that you missed Cyber Day 5-19-17

The link to register for the “Special Session on Cybersecurity” is http://asulawcle.com/get   

CLE $100 (3.5 CLE credits)

Non-CLE $50

