Google Search's autocomplete is often a source of amusement, surfacing funny and wild queries. Beyond the memes it inspires, however, the feature can also surface inappropriate predictions, and Google is planning updates in the coming weeks to reduce the likelihood of that happening.
Predictions appear across Google Search on the web, iOS, and Android, as well as in Chrome's Omnibox, and range from individual words to whole phrases. Google notes that autocomplete reduces typing by approximately 25% and is especially useful on mobile devices given the limited screen real estate.
Autocomplete is designed to help people complete a search they already intended to make, not to suggest new kinds of searches to perform. Google describes the predictions as its best guesses at the query a user was likely to continue entering.
It operates by examining real searches to surface common and trending queries relevant to what the user has typed so far, while also factoring in location and previous search history.
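To make that description concrete, here is a minimal, hypothetical sketch of prefix-based prediction ranking in Python. The `predict` function, the boost factors, and the sample query log are all invented for illustration and are in no way Google's actual implementation, which operates at a vastly larger scale.

```python
from collections import Counter

# Hypothetical illustration only: rank real past queries that share the
# user's prefix, boosting those that match location or search history.
def predict(prefix, query_log, location=None, history=None, k=5):
    """Return up to k predicted completions for `prefix`."""
    history = set(history or [])
    counts = Counter(q for q in query_log if q.startswith(prefix))

    def score(item):
        query, freq = item
        boost = 1.0
        if location and location in query:
            boost *= 1.5   # assumed boost for location relevance
        if query in history:
            boost *= 2.0   # assumed boost for the user's own past searches
        return freq * boost

    ranked = sorted(counts.items(), key=score, reverse=True)
    return [query for query, _ in ranked[:k]]

log = ["weather today", "weather in seattle", "weather radar", "web fonts"]
print(predict("weather", log, location="seattle", history=["weather radar"]))
# -> ['weather radar', 'weather in seattle', 'weather today']
```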
Existing autocomplete policies already work to remove predictions that are sexually explicit (unless related to medical, scientific, or sex education topics), hateful (targeting groups or individuals on the basis of race, religion, or other demographics), violent, dangerous, or harmful.
Other removal reasons include spam, piracy, and valid legal requests, with Google noting that its "guiding principle" is "that autocomplete should not shock users with unexpected or unwanted predictions."
Last year, Google launched a feedback tool that lets users flag offending predictions for review, and in the coming weeks it is expanding its policies on hate and violence. On the latter, any predictions that "advocate, glorify or trivialize violence and atrocities, or which disparage victims" will be removed.
Meanwhile, as part of the expanded hate criteria, any predictions "reasonably perceived as hateful or prejudiced toward individuals and groups, without particular demographics" will not be allowed.