Alphabet and Google CEO Sundar Pichai calls for AI regulation

In his first big public move since being appointed the CEO of Alphabet last month, Sundar Pichai today called for AI regulation to govern how the promising new technology is leveraged.

Pichai published an opinion piece in the UK’s Financial Times today, bluntly noting that “artificial intelligence needs to be regulated.” In his view, companies cannot just build technology and “let market forces decide how it will be used.”

Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.

Pointing out that “history is full of examples of how technology’s virtues aren’t guaranteed,” the executive cites how the internal combustion engine both expanded travel and caused more accidents. More recently, he notes, the internet’s reach has made it easier for misinformation to proliferate.

In terms of achieving this regulation, Pichai lays out some starting points and guidelines. He argues that “international alignment will be critical to making global standards work,” and points to Europe’s GDPR as a “strong foundation.”

Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities.

Diving into specific examples, he notes how existing medical frameworks are “good starting points” for devices like AI-assisted heart monitors. Meanwhile, self-driving cars require governments to “establish appropriate new rules that consider all relevant costs and benefits.” The former area is something that Google Health and Alphabet’s Verily are actively working on, while the latter has Waymo already operating a commercial ride service in Phoenix.

The CEO makes reference to Google’s AI Principles introduced in 2018 following heavy internal criticism about the Cloud division’s military work on recognizing drone footage. Applied company-wide, they “specify areas where we will not design or deploy.”

Google wants to be a “helpful and engaged partner to regulators” by offering “expertise, experience and tools as we navigate these issues together.” Sundar Pichai starts and ends the regulation editorial by noting the immense promise of AI:

AI has the potential to improve billions of lives, and the biggest risk may be failing to do so. By ensuring it is developed responsibly in a way that benefits everyone, we can inspire future generations to believe in the power of technology as much as I do.

Abner Li

Editor-in-chief. Interested in the minutiae of Google and Alphabet.