Over the weekend, Google presented a white paper at the Munich Security Conference detailing how it fights disinformation across its largest services. This includes efforts covering Google Search, News, and YouTube, as well as advertising platforms.

Disinformation has gone by many names in recent years, including “fake news” and the “post-truth” era. Google characterizes disinformation as “deliberate efforts to deceive and mislead using the speed, scale, and technologies of the open web,” and as something that undermines its mission of organizing the world’s information.

The entities that engage in disinformation have diverse goals. Some are financially motivated, spreading disinformation to turn a profit. Others are politically motivated, seeking to foster specific viewpoints among a population, to exert influence over political processes, or simply to polarize and fracture societies. Still others, commonly referred to as “trolls,” engage in disinformation for their own entertainment, which often involves bullying.

At a high level, the company’s efforts comprise three strategies tailored to each product.

Make Quality Count

Google organizes and surfaces content using “ranking algorithms” that do “not [foster] the ideological viewpoints of the individuals that build or audit them.” In Search, quality is assessed by human Search Quality Raters around the world who adhere to the company’s published guidelines.
Counteract Malicious Actors

This includes content creators who try to deceive ranking systems to gain more visibility, with each service outlining what that entails: “misrepresentation of one’s ownership or primary purpose on Google News and our advertising products, or impersonation of other channels or individuals on YouTube.” Google notes that its 20 years of experience combating spam can be applied to tackling disinformation.

Give Users More Context

As evidenced by Google News last year, the company believes providing a “diverse set of perspectives [is] key to providing users with the information they need to form their own views.” This includes Knowledge and Information Panels in Search and YouTube, as well as the increased use of fact checkers on News. In ads, this includes telling users why they are seeing a particular ad and disclosing which parties are behind election advertising.

Google also touts its commitment to supporting quality journalism through the Google News Initiative (GNI), and by working with outside experts and researchers. Looking forward, the company believes media literacy could help people recognize disinformation, while protecting democratic elections remains another important focus. A further emerging threat is “deep fakes,” or “synthetic media” generated by AI.

The field of synthetic media is fast-moving and it is hard to predict what might happen in the near future. To help prepare for this issue, Google and YouTube are investing in research to understand how AI might help detect such synthetic content as it emerges, working with leading experts in this field from around the world.

The full white paper is worth a read and details the steps Google is taking across its four key products.