YouTube argues it’s getting better at removing hate speech

July 2024 · 7 minute read

YouTube released data on Tuesday arguing that it is getting better at spotting and removing videos that break its rules against disinformation, hate speech and other banned content.

The Google-owned video service said 0.16 percent to 0.18 percent of all the video views on its platform during the fourth quarter of 2020 were on content that broke its rules. That’s down 70 percent from the same period in 2017, the year the company began tracking it.

But because of the immense scale of YouTube — more than 1 billion hours of video are watched on the site every day — that still amounts to potentially millions of views. The metric relies on a sample of videos the company says is broadly representative but doesn’t account for all the content posted to the platform.
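
To see how a fraction of a percent can still amount to millions of views, here is a rough back-of-envelope sketch. The daily-hours figure comes from the article; the average view length is a hypothetical assumption introduced only to make the arithmetic concrete.

```python
# Back-of-envelope arithmetic for the scale argument above.
# DAILY_HOURS_WATCHED comes from the article; ASSUMED_AVG_VIEW_MINUTES is a
# hypothetical figure used only to convert hours watched into a view count.

DAILY_HOURS_WATCHED = 1_000_000_000      # "more than 1 billion hours ... every day"
ASSUMED_AVG_VIEW_MINUTES = 5             # hypothetical average length of a single view
VIOLATIVE_VIEW_RATE = 0.0016             # low end of YouTube's 0.16%-0.18% range

daily_views = DAILY_HOURS_WATCHED * 60 / ASSUMED_AVG_VIEW_MINUTES
violative_views_per_day = daily_views * VIOLATIVE_VIEW_RATE

print(f"Estimated daily views: {daily_views:,.0f}")                                   # ~12,000,000,000
print(f"Estimated daily views of violative content: {violative_views_per_day:,.0f}")  # ~19,200,000
```

Under those assumptions, even the low end of YouTube’s range works out to roughly 19 million views of rule-breaking content per day.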

The numbers underline a core issue facing YouTube and other social networks: how to keep their platforms open and growing while minimizing harmful content that might trigger harsher scrutiny from governments already keen to regulate them.

“My top priority, YouTube’s top priority, is living up to our responsibility as a global platform. And this is one of the most salient metrics in that bucket,” said Neal Mohan, YouTube’s chief product officer and a longtime Google executive known for growing the company’s ad business.

In the past year, YouTube has come under fire for harboring misinformation about covid-19, facilitating the spread of baseless claims that the 2020 presidential election was rigged, and allowing white supremacists to post racist videos. YouTube is a major revenue driver for Google, bringing in more than $6.8 billion in the last quarter of 2020 alone.

The company says it has taken action, removing anti-vaccine content and coronavirus misinformation under its policy against medical misinformation, purging the site of videos related to the QAnon extremist ideology, and banning President Donald Trump’s account after the Jan. 6 Capitol riot. Trump’s account remains banned.

It wasn’t long ago that social networks such as Facebook and YouTube denied that they were even part of the problem. After Trump’s election in 2016, Facebook chief executive Mark Zuckerberg rejected the idea that his site had a notable impact on the result. For years, YouTube prioritized getting people to watch more videos above all else, and ignored warnings from employees that it was spreading dangerous misinformation by recommending it to new users, Bloomberg News reported in 2019.

In the years since, as scrutiny from lawmakers intensified and employees of YouTube, Facebook and other major social networks began questioning their own executives, the companies have taken a more active role in policing their platforms. Facebook and YouTube have both hired thousands of new moderators to review and take down posts. The companies have also invested more in artificial intelligence that scans each post and video, automatically blocking content that has already been categorized as breaking the rules.
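
The “already categorized” matching that underpins much of this automation can be sketched, very loosely, as a fingerprint lookup. The example below is a simplified illustration rather than any platform’s actual system: real matching relies on perceptual video fingerprints rather than exact file hashes, and the hash database here is invented.

```python
# Hypothetical sketch: block uploads whose fingerprint matches content that was
# previously removed for breaking the rules. SHA-256 stands in for the
# perceptual fingerprints real systems use.

import hashlib

# Assumed database of fingerprints for previously removed videos (example entry
# is simply the SHA-256 of the bytes b"test").
known_violative_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(video_bytes: bytes) -> str:
    """Return a stand-in fingerprint for an uploaded video."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_block(video_bytes: bytes) -> bool:
    """Block uploads that match previously removed content."""
    return fingerprint(video_bytes) in known_violative_hashes

print(should_block(b"test"))       # True: matches the example entry above
print(should_block(b"new video"))  # False: no match, goes to normal review
```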

At YouTube, AI takes down 94 percent of rule-breaking videos before anyone sees them, the company says.

Democratic lawmakers say the company still isn’t doing enough. They have floated numerous proposals to change a decades-old law known as Section 230 to make Internet companies more liable for hate speech posted on their platforms. Republicans want to change the law too, but with the stated goal of making it harder for social media companies to ban certain accounts. The unproven idea that Big Tech is biased against conservatives is popular with Republican voters.

Researchers who study extremism and online disinformation say there are still concrete steps that YouTube could take to further reduce disinformation. Companies could work together more closely to identify and take down rule-breaking content that pops up on multiple platforms, said Katie Paul, director of the Tech Transparency Project, a research group that has produced reports on how extremists use social media.

“That is an issue we haven’t seen the platforms work together to deal with yet,” Paul said.

Platforms could also be more aggressive in banning repeat offenders, even if they have huge audiences.

When YouTube and other social networks took down Trump’s accounts, false claims of election fraud fell overall, according to San Francisco-based analytics firm Zignal Labs. Just a handful of “repeat spreaders” — accounts that posted disinformation often and to large audiences — were responsible for much of the election-related disinformation posted to social media, according to a report from a group that included researchers from the University of Washington and Stanford University.

In the days after the Capitol riot, YouTube did ban one such repeat spreader — former Trump adviser Stephen K. Bannon. The YouTube page for Bannon’s “War Room” podcast was taken down after another Trump ally, Rudolph W. Giuliani, made false claims about election fraud on a video posted to the channel. Bannon had multiple strikes under YouTube’s moderation system.

“One of the things that I can say for sure is the removal of Steve Bannon’s ‘War Room’ has made a difference around the coronavirus talk, especially the talk around covid as a bioweapon,” said Joan Donovan, a disinformation and extremism researcher at Harvard University.

YouTube is invaluable to figures such as Bannon who are trying to reach the biggest audience they can, Donovan said. “They can still make a website and make those claims, but the cost of reaching people is exorbitant; it’s almost prohibitive to do it without YouTube,” she said.

YouTube’s Mohan said the company doesn’t target specific accounts, but rather evaluates each video separately. If an account repeatedly uploads videos that break the rules, it faces an escalating set of restrictions, including temporary bans and removal from the program that gives video makers a cut of advertising money. Three strikes within a 90-day period results in a permanent ban.
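
As a loose sketch of how such a strike policy could work in code: the 90-day window and three-strike threshold come from Mohan’s description, while the class, the interim penalties, and everything else below are illustrative assumptions, not YouTube’s actual implementation.

```python
# Hypothetical "three strikes in 90 days" policy sketch.

from dataclasses import dataclass, field
from datetime import datetime, timedelta

STRIKE_WINDOW = timedelta(days=90)   # window cited in the article
MAX_STRIKES = 3                      # third strike in the window -> permanent ban

@dataclass
class Channel:
    name: str
    strikes: list = field(default_factory=list)
    banned: bool = False

    def record_strike(self, when: datetime) -> str:
        """Record a strike and return the resulting action."""
        if self.banned:
            return "already banned"
        self.strikes.append(when)
        # Only strikes inside the rolling 90-day window count toward a ban.
        active = [s for s in self.strikes if when - s <= STRIKE_WINDOW]
        if len(active) >= MAX_STRIKES:
            self.banned = True
            return "permanent ban"
        # The interim penalties below are assumptions for illustration.
        return "temporary upload freeze" if len(active) == 2 else "warning and video removal"

channel = Channel("example_channel")
print(channel.record_strike(datetime(2021, 1, 10)))   # warning and video removal
print(channel.record_strike(datetime(2021, 2, 1)))    # temporary upload freeze
print(channel.record_strike(datetime(2021, 3, 15)))   # permanent ban
```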

“We don’t discriminate based on who the speaker is; we really do focus on the content itself,” Mohan said. Unlike Facebook and Twitter, the rules don’t make an exception for major world leaders, he said.

Mohan also emphasized the work that the company has done in reducing the spread of what it calls “borderline” content — videos that don’t break specific rules but are close to doing so. Previous versions of YouTube’s algorithms may have boosted those videos because of how popular they were, but that has changed, the company says. It also promotes content from “authoritative” sources — such as mainstream news organizations and government agencies — when people search for hot-button topics such as covid-19.
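
The ranking change Mohan describes can be thought of, very roughly, as a scoring adjustment: demote borderline videos regardless of popularity, and boost authoritative sources for sensitive searches. The weights and the function below are purely illustrative assumptions, not YouTube’s algorithm.

```python
# Illustrative scoring adjustment, not an actual ranking system.

def adjusted_score(base_popularity: float, is_borderline: bool,
                   is_authoritative: bool, sensitive_query: bool) -> float:
    """Return a hypothetical ranking score after policy adjustments."""
    score = base_popularity
    if is_borderline:
        score *= 0.1   # heavily demote borderline content, however popular
    if sensitive_query and is_authoritative:
        score *= 2.0   # boost mainstream/government sources on hot-button searches
    return score

# A popular borderline video no longer outranks a less popular authoritative one.
print(adjusted_score(1000, is_borderline=True, is_authoritative=False, sensitive_query=True))   # 100.0
print(adjusted_score(300, is_borderline=False, is_authoritative=True, sensitive_query=True))    # 600.0
```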

“We don’t want YouTube to be a platform that can lead to real-world harm in an egregious way,” Mohan said. The company is constantly seeking input from researchers and civil rights leaders to decide how it should design and enforce its policies, he said. That process is global, too. In India, for example, the interpretation of anti-hate policies may be more focused on caste discrimination, whereas moderators in the United States and Europe will be more attuned to white supremacy, Mohan said.

Most of the content on YouTube isn’t borderline and doesn’t break the rules, Mohan said. “We’re having this conversation around something like the violative view rate, which is 0.16 percent of the views on the platform. Well, what about the remaining 99.8 percent of the views that are there?”

Those billions of views represent people freely sharing and viewing content without traditional gatekeepers such as TV networks or news organizations, Mohan said. “Now they can share their ideas or creativity with the world and get to an audience that they probably wouldn’t have even imagined they could have gotten to.”

Still, even if the metric is accurate, that same openness and immense scale mean that content capable of causing real-world harm remains a reality on YouTube.

“You see the same kind of problems with moderating at scale on YouTube like you do on Facebook,” said Paul, the disinformation researcher. “The issue is there’s such a vast amount of content.”
