
Google says it will use a mix of AI and human beings to identify & remove extremist YouTube videos

Google has pledged to use a mix of artificial intelligence and human beings to identify and remove extremist videos from YouTube.

The move was prompted when a failure in YouTube’s existing filters saw ads from governments and major brands appear within and alongside hate videos. These included videos of former Ku Klux Klan official and Holocaust denier David Duke, as well as Steven Anderson, a preacher banned from Britain after praising the terrorist attack on a gay nightclub in Orlando …

YouTube announced new policies and controls earlier this month after advertisers boycotted the platform, and Google has now revealed the steps it is taking to enforce them.

Google said that it had used the terrorism-related content it had removed to help train AI systems.

We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months. We will now devote more engineering resources to apply our most advanced machine learning research to train new “content classifiers” to help us more quickly identify and remove extremist and terrorism-related content.
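To make the idea concrete, here is a minimal, hypothetical sketch of the kind of supervised ‘content classifier’ the quote describes: a model trained on content that human reviewers have already removed or allowed. It uses scikit-learn on text signals for brevity; Google’s actual video analysis models are not public, and every name and data point below is invented.

```python
# Illustrative only: a toy binary classifier trained on past review
# decisions. Not Google's system; all data here is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: text signals (titles/transcripts) from
# videos human reviewers previously removed (1) or allowed (0).
texts = [
    "join our fight, attack the unbelievers",    # removed
    "martyrdom propaganda recruitment tape",     # removed
    "bbc news report on the syrian conflict",    # allowed
    "documentary on the history of extremism",   # allowed
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
classifier.fit(texts, labels)

# Score new uploads. Anything above an (arbitrary) threshold would be
# routed to human review rather than removed outright, since the same
# footage can be news in one context and propaganda in another.
for text in ["recruitment tape calling for attacks",
             "news segment about a foiled attack"]:
    p = classifier.predict_proba([text])[0, 1]
    print(f"{p:.2f}  {text}")
```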

But because context is important – the same footage could have a different meaning in a BBC news report and an ISIS propaganda video – Google said it would also be boosting its ‘Trusted Flagger’ program, made up of independent non-governmental organizations with expertise in fields like anti-terrorism and hate speech.

Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 per cent of the time and help us scale our efforts and identify emerging areas of concern. We will expand this programme by adding 50 expert NGOs to the 63 organisations who are already part of the programme, and we will support them with operational grants.
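For illustration, the human-in-the-loop routing the quote describes could be sketched as a simple priority queue: machine scores and incoming flags feed a review backlog, with Trusted Flagger reports weighted more heavily than ordinary user flags because they are accurate far more often. Every name, weight and ID below is invented.

```python
# Hypothetical review-queue routing; weights and names are invented.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class ReviewItem:
    priority: float
    video_id: str = field(compare=False)

def enqueue(queue, video_id, model_score, flag_source=None):
    """Push a video onto the review queue; lower priority value = reviewed sooner."""
    # Trusted Flagger reports are right over 90% of the time, so they
    # jump ahead of ordinary user flags. The weights here are arbitrary.
    weight = {"trusted_flagger": 0.9, "user": 0.3, None: 0.0}[flag_source]
    heapq.heappush(queue, ReviewItem(-(model_score + weight), video_id))

queue = []
enqueue(queue, "vid_001", model_score=0.55, flag_source="user")
enqueue(queue, "vid_002", model_score=0.40, flag_source="trusted_flagger")
enqueue(queue, "vid_003", model_score=0.95)  # model-only detection

while queue:
    item = heapq.heappop(queue)
    print(f"review {item.video_id} (score {-item.priority:.2f})")
```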

The company says that it will also be taking a tougher line on borderline videos, ensuring that, when they are allowed to remain on the service, they cannot be monetized, recommended or commented on – and will appear behind warnings.

That means these videos will have less engagement and be harder to find. We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.
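In practice, that ‘limited state’ amounts to a bundle of restrictions rather than removal. Here is a hypothetical sketch of how the decision might be modelled, with all names and fields invented; this is not YouTube’s actual data model.

```python
# Invented data model for the restrictions described above; not
# YouTube's actual schema.
from dataclasses import dataclass

@dataclass
class VideoState:
    removed: bool = False
    monetizable: bool = True
    recommendable: bool = True
    comments_enabled: bool = True
    behind_warning: bool = False

def apply_review_decision(decision: str) -> VideoState:
    """Map a reviewer's decision to the restrictions placed on a video."""
    if decision == "violates_policy":
        return VideoState(removed=True, monetizable=False,
                          recommendable=False, comments_enabled=False)
    if decision == "borderline":
        # Allowed to stay up, but with less engagement and discoverability.
        return VideoState(monetizable=False, recommendable=False,
                          comments_enabled=False, behind_warning=True)
    return VideoState()  # no action taken

print(apply_review_decision("borderline"))
```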

You can read full details on Google’s blog.

Photo: Reuters/Dado Ruvic

