Google has long had to tackle the attempts of those trying to “game” search with low-quality content, with “fake news” being the modern-day extension of that. Today, the company is announcing “structural changes” to Search that address the “spread of blatantly misleading, low quality, offensive or downright false information.”
Out of hundreds of billions of indexed pages, 0.25 percent of daily queries return “offensive or clearly misleading content.” Noting that this problem is different from past issues, the company says its goal remains the same:
While this problem is different from issues in the past, our goal remains the same—to provide people with access to relevant information from the most reliable sources available. And while we may not always get it right, we’re making good progress in tackling the problem.
Google is using human evaluators to assess the quality of search results for that subset of problematic queries. This feedback provides data on the quality of results and flags areas that need improvement, but it does not directly determine individual page rankings.
With more detailed Search Quality Rater Guidelines covering low-quality pages (misleading information, unexpectedly offensive results, hoaxes, and unsupported conspiracy theories), algorithms will begin demoting such content.
The second change relates to ranking. Signals used to determine which results are shown have been adjusted to help surface more “authoritative pages and demote low-quality content, so that issues similar to the Holocaust denial results that we saw back in December are less likely to appear.”
Additionally, Google is launching direct feedback tools that allow users to flag erroneous Featured Snippets and Autocomplete predictions. Pressing “Report inappropriate predictions” and “Feedback” will surface a dialog with clearly labeled categories. Each also features a comments section, and this feedback will be used to help improve algorithms.
The last change sees Google providing greater transparency into its products to address earlier questions about why “shocking or offensive predictions were appearing in Autocomplete” and on the Assistant on Google Home.