
OECD AI Policy Observatory’s Framework for Trustworthy AI

To help AI practitioners determine which tool is best suited to their use case, the OECD.AI Policy Observatory has released a comparative framework for trustworthy AI tools. As users apply this framework, their “submissions will form a live database with interactive features and information with the latest tools.” OECD.AI’s recent blog post, “What are the tools for implementing trustworthy AI? A comparative framework and database,” notes Arizona State University’s work in developing the framework for implementing trustworthy AI.

The framework has undergone several rounds of expert testing and validation. Research from OECD partners, the Global Partnership on AI, the Open Community for Ethics in Autonomous and Intelligent Systems, and Arizona State University backs up the framework for identifying tools. The framework classifies tools according to the characteristics and context of the AI system, and helps users identify the types of tools best suited to a specific context.

The framework compares trustworthy AI tools along seven dimensions (a rough sketch of how these dimensions might combine into a single database record follows the list):

  1. Description: This includes the tool’s name, background information, and hyperlinks to further details.
  2. Origin: This covers the organization, stakeholder group, and geographical region from which the tool originates, as well as its first date of publication.
  3. Categorization: This covers the type of approach and the type of tool.
  4. Scope: This covers platform specificity; target users, policy areas, and stakeholder groups; geographical scope; impacted stakeholders; and the AI system lifecycle stage(s) covered.
  5. Alignment with international AI principles: This covers the tool’s relevance to the OECD AI Principles and the European Commission’s key requirements for trustworthy AI.
  6. Adoption potential: This describes the maturity of the tool/approach and the degree to which it is kept up to date, as well as the resources and legal conditions required to use it and the stakeholders who need to be involved.
  7. Implementation incentives: This enumerates the benefits users can expect from the tool/approach and the enforcement mechanisms that may facilitate its implementation.
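
To make the seven dimensions concrete, here is a minimal sketch of how one tool entry might be represented as a structured record in a live database. This is an illustration only: the field names, types, and sample values are assumptions made for this sketch, not the OECD.AI database’s actual schema.

from dataclasses import dataclass, field

# Illustrative sketch (Python 3.9+). Field names and sample values are
# assumptions for illustration, not the OECD.AI database's actual schema.
@dataclass
class ToolEntry:
    # 1. Description
    name: str
    background: str
    links: list[str] = field(default_factory=list)
    # 2. Origin
    organization: str = ""
    stakeholder_group: str = ""
    region: str = ""
    first_published: str = ""     # e.g. "2021-05"
    # 3. Categorization
    approach_type: str = ""       # e.g. "technical" or "procedural"
    tool_type: str = ""
    # 4. Scope
    platform_specific: bool = False
    target_users: list[str] = field(default_factory=list)
    lifecycle_stages: list[str] = field(default_factory=list)
    # 5. Alignment with international AI principles
    oecd_principles: list[str] = field(default_factory=list)
    ec_requirements: list[str] = field(default_factory=list)
    # 6. Adoption potential
    maturity: str = ""
    required_resources: str = ""
    # 7. Implementation incentives
    benefits: list[str] = field(default_factory=list)
    enforcement_mechanisms: list[str] = field(default_factory=list)

# A hypothetical entry, showing how the dimensions combine into one record.
entry = ToolEntry(
    name="Example Fairness Checker",
    background="Open-source auditing tool for classification models.",
    links=["https://example.org/tool"],
    organization="Example Lab",
    approach_type="technical",
    lifecycle_stages=["verify & validate"],
    oecd_principles=["fairness", "transparency & explainability"],
)

A record of this shape would support the interactive features the Observatory describes: filtering the database by lifecycle stage, region, or principle amounts to matching on the corresponding fields.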
 

Learn more about the framework on the OECD.AI Policy Observatory’s website.

 

 
