A Nobel Laureate Speaks – AI: the great possibilities and the great dangers

“We are They.  They are Us” – Frank Wilczek

Prominent theoretical physicist and Nobel laureate in Physics Frank Wilczek made this comment (embedded in the photograph above) while speaking to a meeting of the Sandra Day O’Connor College of Law’s Center for Law, Science & Innovation study group, where he emphasized that when it comes to Artificial Intelligence (AI), we can’t afford to make mistakes.

Wilczek points out that AI is a different animal from other current and past threats. A bright, reasonable and soft-spoken man, Wilczek is worried about what he calls “malignant AI,” in particular militarized AI, including the future possibility of private AI armies taking over the world. Yes, military technology will continue to advance, bestowing great benefits and, at the same time, creating the possibility for things to go wrong. Very wrong.

Wilczek suggests a handful of options directed toward preventing an AI catastrophe: establishing international AI standards, instituting civilian review of AI military programs, setting mandatory weaknesses to curtail AI’s potential, and requiring global openness — in other words, no secret research. Of these possibilities, Wilczek finds “openness” the most promising in terms of both its feasibility and its effectiveness. He optimistically sees the greater, cohesive good prevailing over a few bad apples, keeping the latter in check, in the quest for and realization of AI openness. Wilczek likens openness to stability and compares AI openness to computer security openness (an oxymoron?), which he believes has worked relatively well despite a few glitches. As he says, “most people are good” and want to prevent bad things from happening.

One of the attendees draws an important distinction between our use of AI for our own purposes and AI developing its own purposes or “passions.” In terms of using AI for our own purposes, the question arises of how to effectively develop ethical military systems and software. In the same vein, someone makes the point that once AI evolves to become truly autonomous and develops its own passions, its essence and being may go beyond what we, as humans, can conceive of or plan for. With regard to such self-determining AI, Wilczek is of the view that its realization is still relatively far in the future. Therefore, our current fears should focus on AI systems that increasingly remove humans from the decision-making loop, potentially resulting in unethical, harmful or even catastrophic actions.

At the end of the day, the group seems to agree that we will continue to yearn for and develop smart technology but we will want to have control over that technology. As Wilczek says, human control is our safety net — our best protection against AI getting out of hand and the “unknown unknowns” becoming reality.

This meeting was the second session of the Center’s study group, composed of lawyers, scientists, engineers, businesspeople and students. The group meets multiple times during the spring semester for a dinner discussion on a set of topics relating to a specific theme. This spring’s theme is doomsday predictions, and upcoming sessions will focus on environmental catastrophism, technological unemployment and Thomas Piketty’s predictions about growing wealth inequality.
