YouTube Has A Master Plan To Target And Eradicate Online Terrorism, Extremist Videos

Terrorist organizations like ISIS often use social media services to deliver messages of hate and to recruit new members. To counter this, Google and its YouTube division have worked for years to identify videos that violate YouTube's terms of service, but in acknowledging an "uncomfortable truth," Google said that "more needs to be done. Now." Following up on that sentiment, Google outlined a number of steps it is taking to identify and remove violent extremist videos.

The effort starts with Google's underlying technology and, presumably, its fancy algorithms. Google vowed to increase its use of technology to identify extremist and terrorism-related videos. More specifically, Google will build upon its video analysis models that are already in use by applying machine learning research to train new content classifiers. Machine learning and artificial intelligence have been major areas of focus, both within Google and the technology sector as a whole, and sifting through YouTube videos to identify and remove extremist and terrorism-related content is one area where those efforts will pay dividends.
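To make the idea concrete, here is a minimal sketch of what training a content classifier can look like. Google's production models analyze the video itself and are proprietary; this toy example instead uses scikit-learn to score hypothetical video titles, and all of the training data is invented for illustration.

```python
# Illustrative sketch only: a toy text classifier that scores video
# metadata. Google's actual video analysis models are far more complex
# and are not public; everything here is a hypothetical stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented labeled examples: 1 = policy-violating, 0 = benign.
titles = [
    "join our fight, recruitment message for new members",
    "how to bake sourdough bread at home",
    "martyrdom propaganda compilation",
    "relaxing piano music for studying",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(titles, labels)

# New uploads get a score; high-probability hits would be routed to
# human reviewers rather than removed automatically.
score = classifier.predict_proba(["recruitment video for the cause"])[0][1]
print(f"violation probability: {score:.2f}")
```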

At the same time, Google acknowledged that "technology alone is not a silver bullet." Humans remain better at making nuanced judgments about video content, so Google plans to "greatly increase" the number of independent experts participating in YouTube's Trusted Flagger program.

"Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 per cent of the time and help us scale our efforts and identify emerging areas of concern," Google stated in a blog post. "We will expand this program by adding 50 expert NGOs to the 63 organizations who are already part of the program, and we will support them with operational grants."

That is a big investment on Google's part. In addition to beefing up its use of technology and bringing on more flesh and blood workers, Google said it will take a tougher stance on videos that are not in clear violation of its policies. Google provided the example of videos containing inflammatory religious or supremacist content. To deal with these types of videos, YouTube will slap an interstitial warning on them. These videos will also be ineligible for monetization, will not be recommended, and will not allow comments or user endorsements. The idea is to scale back the level of engagement, which in turn will make these types of videos harder to find.
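In other words, borderline videos stay up but enter a limited state. Here is a rough, hypothetical sketch of what that state could look like as data; the field names are invented for illustration and do not reflect YouTube's internal schema.

```python
# Hypothetical sketch of the "limited state" described above: the video
# remains available, but engagement features are stripped away.
import dataclasses
from dataclasses import dataclass

@dataclass
class VideoState:
    interstitial_warning: bool = False
    monetization_enabled: bool = True
    recommendations_enabled: bool = True
    comments_enabled: bool = True
    endorsements_enabled: bool = True

def apply_limited_state(state: VideoState) -> VideoState:
    """Warn viewers, demonetize, and disable all engagement features."""
    return dataclasses.replace(
        state,
        interstitial_warning=True,
        monetization_enabled=False,
        recommendations_enabled=False,
        comments_enabled=False,
        endorsements_enabled=False,
    )

print(apply_limited_state(VideoState()))
```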

"We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints," Google said.

Lastly, YouTube will expand its counter-radicalization efforts. Part of that will entail building upon its Creators for Change program, which promotes YouTube voices against hate and radicalization. YouTube will also leverage Google's targeted advertising technology, which in this case will be used to push anti-terrorism videos to people it has identified as potential ISIS recruits.
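A hypothetical sketch of that redirection logic follows; the keyword list and playlist ID are placeholders invented for illustration, not anything Google has published.

```python
# Hypothetical sketch: match at-risk queries against recruitment-related
# terms and surface a counter-narrative playlist instead of (or alongside)
# ordinary results. All terms and IDs below are invented placeholders.
from typing import Optional

COUNTER_NARRATIVE_PLAYLIST = "PL_counter_narrative_placeholder"
RECRUITMENT_TERMS = {"join the caliphate", "foreign fighter travel", "recruitment video"}

def maybe_redirect(query: str) -> Optional[str]:
    """Return a counter-narrative playlist ID for at-risk queries."""
    normalized = query.lower()
    if any(term in normalized for term in RECRUITMENT_TERMS):
        return COUNTER_NARRATIVE_PLAYLIST
    return None

print(maybe_redirect("How to join the caliphate"))  # playlist ID
print(maybe_redirect("cat videos"))                 # None
```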

"Collectively, these changes will make a difference. And we’ll keep working on the problem until we get the balance right," Google said.