
Google details formal review process for enforcing AI Principles, plans external advisory group

Back in June, in response to the Project Maven backlash, Google released its AI Principles, which codified how artificial intelligence would be used in research and products going forward. The company is now detailing the additional initiatives and processes it has implemented to ensure those guidelines are enforced.

Over the past six months, Google has encouraged teams throughout the company to “consider how and whether our AI Principles affect their projects.” A new training course aimed at both technical and non-technical employees is meant to “address the multifaceted ethical issues that arise in their work.”

The course is based on the “Ethics in Technology Practice” curriculum developed at Santa Clara University and further tailored to the AI Principles. Over a hundred employees from around the world have taken it so far, with Google hoping to make it more widely available in the future.

Google has also invited external experts to an AI Ethics Speaker Series covering topics like bias in natural language processing and AI in criminal justice. Meanwhile, the publicly available Machine Learning Crash Course this year added a technical module on fairness that focuses on identifying and mitigating bias in training data.
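To give a sense of what “identifying bias in training data” can involve in practice, here is a minimal, hypothetical Python sketch (not taken from Google’s module) that checks two common signals: whether each group is represented proportionally in the data, and whether positive-label rates diverge across groups. The example data and the 20-point disparity threshold are illustrative assumptions.

```python
from collections import Counter, defaultdict

# Toy training examples as (sensitive_group, label) pairs.
# Both the data and the threshold below are illustrative only.
examples = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1),
]

# 1. Representation: how much of the data does each group contribute?
counts = Counter(group for group, _ in examples)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n / total:.0%} of training data")

# 2. Label balance: does the positive-label rate differ across groups?
positives = defaultdict(int)
for group, label in examples:
    positives[group] += label
rates = {g: positives[g] / counts[g] for g in counts}
print("positive-label rate by group:", rates)

# Flag a potential bias signal if rates diverge by more than 20 points.
if max(rates.values()) - min(rates.values()) > 0.20:
    print("warning: labels are skewed across groups; consider "
          "re-sampling or re-weighting before training.")
```

A model trained on data that fails checks like these can learn the skew rather than the task, which is why mitigation techniques such as re-sampling or re-weighting are typically applied before training.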

Beyond these employee resources, Google has established a formal review structure to assess new projects, products, and deals. There are three core groups, though the company did not publicly name leadership or members.

  • A responsible innovation team that handles day-to-day operations and initial assessments. This group includes user researchers, social scientists, ethicists, human rights specialists, policy and privacy advisors, and legal experts on both a full- and part-time basis, ensuring a diversity of perspectives and disciplines.
  • A group of senior experts from a range of disciplines across Alphabet who provide technological, functional, and application expertise.
  • A council of senior executives to handle the most complex and difficult issues, including decisions that affect multiple products and technologies.

In practice, more than 100 reviews have been conducted to assess “scale, severity, and likelihood of best- and worst-case scenarios for each product and deal.” For example, some projects have been modified to “clearly outline assistive benefits as well as model limitations that minimize the potential for misuse.”

Others, like a general-purpose facial recognition API, have been put on hold. Google publicized that decision last week, and it notes a “small number of product use cases” where a similar pause was taken until “important technology and policy questions” have been worked through.

Moving forward, Google also plans to create an external advisory group of experts from multiple fields to “complement the internal governance and processes.”

