YouTube Said It Was Getting Serious About Hate Speech. Why Is It Still Full Of Extremists?

Last month, YouTube announced a site-wide change to its hate speech policy, saying it would no longer tolerate videos promoting Nazism, white supremacy or any other content “alleging that a group is superior in order to justify discrimination, segregation or exclusion” of others due to qualities like race, gender, or sexual orientation.

At the time, the New York Times reported that the company pledged to remove thousands of videos falling under this expanded definition.

More than six weeks later, however, it remains disturbingly easy to find channels associated with hate groups on the platform. Strangely, this isn’t a simple oversight by YouTube’s parent company, Google. In fact, it’s the policy working as planned.

YouTube hosts more than 23 million channels, making it impossible to identify each and every one that is involved with the hate movement — especially since one person’s unacceptable hate speech is another person’s reasonable argument. With that in mind, we used lists of organisations promoting hate from the Southern Poverty Law Center, Hope Not Hate, the Canadian Anti-Hate Network, and the Counter Extremism Project, in addition to channels recommended on the white supremacist forum Stormfront, to create a compendium of 226 extremist YouTube channels earlier this year.

While less than scientific (and carrying an obvious selection bias), this list of channels provided a hazy window onto how YouTube’s promise to counteract hate has played out in practice. And since June 5, just 31 channels from our list of more than 200 have been terminated for hate speech. (Eight others were either banned before this date or went offline for unspecified reasons.)

Before publishing this story, we shared our list with Google, which told us almost 60 per cent of the channels on it have had at least one video removed, with more than 3000 individual videos removed from them in total. The company also emphasised it was still ramping up enforcement. These numbers, however, suggest YouTube is aware of many of the hate speech issues concerning the remaining 187 channels — and has allowed them to stay active.

The rules of hate

To understand why these channels continue to operate, it’s important to know how YouTube polices its platform. YouTube’s enforcement actions are largely confined to what happens directly on its website. There are some exceptions — like when a channel’s content is connected to outside criminality — but YouTube generally doesn’t consider the external behaviour of a group or individual behind an account. It just determines whether a specific video violated a specific policy.

Heidi Beirich, who runs the Southern Poverty Law Center’s Intelligence Project, charges that YouTube’s approach puts it far behind peers like Facebook, which takes a more holistic view of who is allowed to post on its site, prohibiting hate groups and their leaders from using the social network.

“Because YouTube only deals with the content posted, it allows serious white supremacists like Richard Spencer and David Duke to keep content up,” Beirich said. “In general, our feeling is that YouTube has got to get serious about removing radicalizing materials given the impact these videos can have on young, white men.”

YouTube’s limited approach to hate has resulted in some confusing decisions. For example, YouTube recently banned the channel for Fash The Nation, a podcast on the white nationalist network The Right Stuff, but did not ban the channel for The Right Stuff itself.

Similarly, BuzzFeed reports that Austrian ethno-nationalist Martin Sellner (who is being investigated for possible connections to the Christchurch shooter) has had his YouTube channel demonetised, but his videos are still accessible on the site, as are at least seven other channels from the Generation Identity group he organises for.

The apparent inconsistencies go on: The channel of South African neo-Nazi group AWB was terminated. Two others dedicated to violent Greek neo-Nazi party Golden Dawn remain active.

The channel of white nationalist group American Identity Movement, famous for distributing fliers on college campuses, is still up. As is a channel for the white nationalist group VDARE. And, notably, none of the 33 channels on our list run by organisations designated by the Southern Poverty Law Center as anti-LGBTQ hate groups have been removed from the platform.

In addition to giving many hateful channels a pass, this agnosticism to uploaders’ motives means that some channels with no interest in promoting white supremacy have been punished as YouTube enforces its policies.

The channel of News2Share, an independent journalism outlet that frequently covers far-right rallies, was demonetised on June 5, and remains so following significant news coverage.

The channel of Scott Allsop, a Romania-based history teacher, was briefly banned that same day for hosting archival Nazi propaganda videos as an educational resource. Following a public outcry, Allsop’s channel was restored, but his subsequent contact with Google representatives did little to bolster his confidence in the company.

Allsop told Gizmodo that YouTube asked him for other history teachers’ channels so it could review them “ASAP.” But he says one channel he mentioned, which belongs to history teacher Johnny Hemphill, had two videos featuring historical footage from Nazi Germany removed and received a strike for posting hate speech. Elsewhere, multiple Stormfront-recommended channels that host historical Nazi content continue to operate, even though their channel names and avatars openly support anti-Semitism or neo-Nazism.

Gaming the system

Google representatives told Gizmodo that much of what looks from the outside like inconsistent or incomplete enforcement is the result of a deliberate process they believe will make the YouTube community healthier over the long term.

Fundamentally, they explained, the most important change to the platform was a more expansive definition of what constitutes hate speech: content advocating for the inferiority or exclusion of a protected group. There were a number of channels YouTube decided were exclusively engaged in this sort of activity, with no other redeeming qualities. Those channels were terminated immediately once the new policy went live.

Other channels found to have violated the new hate speech rules, but which also had some content that didn’t violate the new policy, now fall under YouTube’s strike system. The first time a channel is found to be in violation, it gets a warning. The second time, a one-week suspension from uploading new content. The third time, the suspension lasts two weeks. The fourth time, it’s terminated.

Strikes two through four reset every 90 days, meaning if a channel is found to be preaching anti-Semitism or white nationalism in a video, comment, story, or playlist, its creator can lay low for a while to reset their counter. The strikes also aren’t retroactive. If a channel was found to have openly advocated for white nationalism in content uploaded prior to this policy change, some of those individual videos may be deleted, but won’t count toward their overall strikes.

The intent of this system is to encourage users to learn the rules and self-police their own content. But it means that terminating the channels of insistent repeat violators would take, at a minimum, almost a month – and likely far longer than that in most instances.
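To make that timeline concrete, here is a minimal sketch of a warning-plus-three-strikes system with a 90-day expiry, written in Python. The class and penalty names are hypothetical; YouTube has not published its enforcement logic, so this only models the policy as it was described to us:

```python
from datetime import datetime, timedelta

STRIKE_WINDOW = timedelta(days=90)  # per the policy described above, strikes expire after 90 days

class Channel:
    """Toy model of a warning-plus-three-strikes policy; names are hypothetical."""

    def __init__(self):
        self.warned = False  # the initial warning never expires
        self.strikes = []    # timestamps of strikes, which do expire

    def record_violation(self, when):
        if not self.warned:
            self.warned = True
            return "warning"
        # Drop strikes that have aged out of the 90-day window, then add the new one.
        self.strikes = [t for t in self.strikes if when - t < STRIKE_WINDOW]
        self.strikes.append(when)
        penalties = ["one-week upload suspension", "two-week upload suspension", "termination"]
        return penalties[min(len(self.strikes), 3) - 1]

# Even a channel that violates the policy as fast as the suspensions allow
# takes roughly three weeks to reach termination.
channel = Channel()
start = datetime(2019, 6, 5)
for day in (0, 1, 8, 22):
    print(day, channel.record_violation(start + timedelta(days=day)))
```

And because strikes expire, a channel that spaces its violations more than 90 days apart never advances past its first suspension, which is the lie-low dynamic described above.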

The policy as described to us was consistent with what we observed happening to our list of hateful channels. Nearly all of the terminations occurred in an initial flurry, with 24 channels going offline within the first couple days. A week later, only a single channel from the list disappeared. Three weeks after that, six more were banned.

It was in that last batch that the channel for White Date, a “Europids”-only dating site that believes every white baby “is an act of resistance,” finally got the axe.

This system also provides a roadmap for precisely how much bile someone can spew on YouTube before the company blasts a channel from the site.

David Duke, former KKK grand wizard and the most famous name in American white supremacy for decades, is allowed to maintain a channel with content like “Israel: The Promised Land for Organised Crime” because his channel also hosts content that isn’t primarily hate speech, like a recent video entitled “Dr Duke Bad Sleep Shrinks Your Testicles Muscle and Brain.”

Demonetising whom?

Our efforts to understand a platform as enormous as YouTube through a few hundred channels amount to something like shining a flashlight on a football field. However, our findings track with the observations of Becca Lewis, a researcher at Data & Society, who traced the tendrils of YouTube’s far-right ecosystem by looking at the participants in a live debate between white nationalist Richard Spencer and an anti-feminist Gamergate personality known as Sargon of Akkad.

By observing who collaborated with these personalities, Lewis was able to map out a network of “alternative influence” on YouTube, from its extreme center to its more mainstream fringes.

“My overall observations indicate that YouTube has not made changes that significantly impact the network of influencers I wrote about last year,” Lewis said. “In my own research, I wrote about how deeply embedded YouTube’s problems were, and how to have any significant impact related to white supremacy on their platform, they would need to change some of their business structures and incentives for creators. Thus far, we haven’t seen changes in those areas.”

Lewis argues that YouTube’s problems are inherent to its business model. The more viewers a channel has, the more money it brings in. “Unfortunately, bigotry can attract big audiences, so creators have an incentive to create bigoted, and thus financially lucrative, content,” she said. “So I do think there’s a lot to be said for YouTube reconsidering their monetisation structures.”

In addition to channel bans and video deletions, YouTube often punishes uploaders who violate its policies by demonetising their content. This prevents them from running ads served by Google on their videos — and receiving the associated revenue.

Recently, YouTube vowed to demonetise channels that “repeatedly brush up against” its hate speech policy, but this may do more to protect advertising partners from negative press than actually discourage noxious behaviour.

“[Demonetization] doesn’t actually take away their ability to broadcast harmful content to a large audience,” said Lewis. “Thus, creators who have been demonetised will often continue to broadcast the same content as before, while also claiming they are being penalised, and continuing to monetise their content through other means.”

It’s easy to see how this works in practice. The site’s new hate speech policy was announced just days after Vox journalist Carlos Maza posted a viral Twitter thread detailing years of targeted, intensely homophobic abuse and harassment by conservative YouTube personality Steven Crowder. (Google representatives insist the policy change had been in the works for months, and the timing was simply a coincidence.) Crowder’s content was demonetised soon after for an “ongoing pattern of egregious behaviour” (not any individual video) but his channel with 4 million subscribers remains active.

That means Crowder can continue posting links on the site to his webstore, which recently sold a shirt with the slogan “SOCIALISM IS FOR F*GS.” Asked about Crowder’s YouTube-facilitated revenue stream, the company simply stated that Crowder will need to stop selling the homophobic shirts (among other unspecified reforms) to be re-monetised.

Crowder is hardly alone among hateful uploaders. In many videos about YouTube’s new policy posted to channels on our list, creators urged viewers to support them directly by subscribing to their crowdfunding accounts on SubscribeStar or Patreon.

James Allsup, a white nationalist who attended the 2017 Charlottesville Unite The Right rally and was briefly an elected GOP official in Washington state, punctuated one of his videos about getting demonetised by reading an ad from a VPN company that sponsors him directly.

Robyn Caplan, who researches YouTube demonetisation at Data & Society with Microsoft Research’s Tarleton Gillespie, has found thousands of channels demonetised by YouTube over the years. Political channels of all stripes, not just ones associated with the far-right, have been stripped of their ability to directly generate income through pre-roll ads.

The issue is especially common with LGBTQ-focused channels, since YouTube’s automated system for flagging channels to demonetise has a tendency to associate tags like “gay” or “lesbian” with pornography, even when the content of those channels is entirely family-friendly.

When one YouTuber they researched ran an experiment tagging the same video with “lesbian,” “gay,” and “straight,” the first two uploads were demonetised, but the third was untouched.
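As a rough illustration of why that experiment produced those results, here is a minimal sketch of a tag-based flagger. It is purely hypothetical: YouTube has not disclosed how its classifier works, but a naive filter that treats certain tags as advertiser-unsafe regardless of the video itself reproduces the behaviour the researchers observed.

```python
# Hypothetical sketch only: tags the system wrongly associates with adult content.
SENSITIVE_TAGS = {"lesbian", "gay"}

def is_monetisable(tags: set) -> bool:
    """Flag a video as advertiser-unsafe if any of its tags is on the sensitive list."""
    return not (tags & SENSITIVE_TAGS)

same_video = {"vlog", "family-friendly"}
print(is_monetisable(same_video | {"lesbian"}))   # False: demonetised
print(is_monetisable(same_video | {"gay"}))       # False: demonetised
print(is_monetisable(same_video | {"straight"}))  # True: untouched
```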

Caplan and Gillespie found seven distinct previous periods of mass demonetisation on YouTube since 2016. Only one of those mass demonetisations — the “Adpocalypse” of Spring 2017, which was sparked by concerns about ads running before pro-ISIS propaganda — had anything to do with hate speech. The others, like what happened after YouTuber Logan Paul broadcast images of a dead body he discovered in a Japanese forest, were less political and more about content advertisers might find generally unsavoury.

A Google representative told Gizmodo that YouTube has a higher standard across the board for content it allows to be linked up with advertisers, who may be squeamish about having their ads running before footage of a neo-Nazi rally, regardless of whether it was uploaded by a neo-Nazi or by CNN.

Similar demonetisation issues hit news outlets covering recent pro-democracy protests in Hong Kong. Much of this controversial content, the representative said, is under some form of limited advertising, even when it comes from mainstream news organisations.

For Caplan, demonetisation as a (heavily automated) moderation policy applied to a platform of YouTube’s scale is doomed to fail, inevitably letting savvy bad actors skate by and instead impacting the marginalised communities that could benefit the most from an open publishing platform.

“YouTube is not willing to just demonetise ‘a politic’… [it is instead] demonetising ‘politics’ because they don’t want to draw those lines,” said Caplan. “Marginalised groups that were using the platform to find their own communities — Black Lives Matter talking about police brutality, Trans YouTubers — end up demonetised. And they have less mobility to find other funding sources.”

A loss of pride

It’s not as if YouTube has never faced a large-scale moderation problem before. Some who have criticised tech platforms for failing to curb far-right hate speech have compared their efforts unfavourably to much more successful crackdowns on Islamist terrorist content.

Sarah Roberts, a UCLA professor who has spent the past decade studying how these platforms moderate content, said one reason for this difference is that it’s easier for platforms like YouTube to identify terrorist videos algorithmically.

She said these systems grew out of technology designed to stop the spread of child pornography. By creating a hashed database of known banned material, platforms can automatically check uploaded content against it, stopping it before it goes live. “When you look at terrorist recruitment material, it’s also the same content being recirculated again and again — like beheading videos or radical sermons,” Roberts explained. “But when the material doesn’t exist already in the world, there’s nothing to perform a mathematical check against.”

Far-right video content tends to be new, Roberts notes. It’s mostly men (and a few women) ranting into their webcams, which makes it much more difficult to flag automatically.
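Conceptually, that hash-matching pipeline looks something like the sketch below, which uses an ordinary cryptographic hash as a stand-in for the perceptual fingerprints real systems rely on. The point Roberts makes is that freshly recorded footage simply has no entry in the database to match against.

```python
import hashlib

# Fingerprints of material already identified as violating policy. Real systems use
# perceptual hashes that survive re-encoding; SHA-256 is only a stand-in here.
KNOWN_BANNED = {
    hashlib.sha256(b"previously identified propaganda clip").hexdigest(),
}

def should_block(upload: bytes) -> bool:
    """Return True if an upload matches a known-banned fingerprint."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_BANNED

# A recirculated copy of known material is stopped before it goes live...
print(should_block(b"previously identified propaganda clip"))  # True
# ...but a brand-new webcam rant has nothing to be checked against.
print(should_block(b"brand-new webcam monologue"))             # False
```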

Roberts, who interviewed hundreds of professional content moderators for her new book Behind the Screen: Content Moderation in the Shadows of Social Media, also warned that major scandal-triggered crackdowns tend to worsen the already miserable working conditions of the contractors often tasked with doing the moderation grunt work.

“I have no doubt that those kind of unflattering PR moments are played out always on the backs of the lowest-level workers,” Roberts said. “During these kinds of flare-ups, there is absolutely a seismic ripple for the workers on the ground.”

Google representatives told Gizmodo the company has wellness rules in place to protect moderators’ emotional health. For example, they explained, moderators are not expected to spend more than five hours a day reviewing potentially disturbing content and have access to regular counselling sessions.

While it’s unclear how Google’s moderators feel about the company’s approach to hate speech, others close to the company’s Mountain View headquarters have started voicing their frustration. Recently, 145 Google employees asked organisers of the San Francisco Pride Parade to pull Google’s sponsorship of the event. “If… YouTube, allows abuse and hate and discrimination against LGBTQ+ persons, then Pride must not provide the company a platform that paints it in a rainbow veneer,” the employees wrote in an open letter.

An organiser for the @NoPrideForGoog campaign, who asked not to be identified by name due to concerns about doxxing and harassment, said that they directly asked YouTube’s leadership for more clarity about its approach to anti-LGBTQ content, but the company’s messaging has been just as opaque internally as it has been with the public at large.

“If they had told us there’s sensitivities around a public statement, but they think this content is wrong and are figuring out how to [deal with] it as fast as they can, we wouldn’t have been thrilled. But we likely would have kept our efforts inside the company, assuming there was also timely progress,” the organiser said.

“We get the exact same message that [YouTube CEO] Susan [Wojcicki] and [Google CEO] Sundar [Pichai] gave to the press, which is, ‘Sorry we hurt your feelings, and we’re taking a harder look, but leaving this up is the correct decision.’”

The @NoPrideForGoog organiser insisted the open letter’s signatories weren’t demanding a specific fix, just a good-faith commitment from leadership to sincerely address the problem. “Google, including YouTube, is full of capable, intelligent folks, both in engineering and otherwise,” said the organiser. “I’m quite confident that if the company were to commit to these changes we’re demanding, they will be able to figure out how to do it.”

For at least one employee, the company’s failures became too much to bear. In late June, an LGBTQ employee, who also requested not to be identified by name, quit in protest of Google’s handling of YouTube’s hate speech issues. In a resignation letter to Google management, the former employee wrote:

Google was once envisioned as a better type of company. It was, and continued to be a good place full of good people. This is not enough. The recent shallow Pride pandering from YouTube while they clutch their purse strings tighter around algorithmically-enforced hate speech click generators demonstrates that this is not enough. My coworkers at YouTube are well-meaning and good-intentioned people. It is not enough.

[…]

The apple is long past rotten. Google has become a monster and we should work to stop it.

The former employee said management made no effort to hear out their concerns.

“I informed my manager at our regularly scheduled one-on-one meeting that I would be resigning,” the employee said. “I offered to have some transition time to hand over the stuff [I was working on]. She said [she] would get back to me, so I went home for the day. I woke up the next day with my Google account shut off and an email to my personal account telling me to please return my laptop and badge to the nearest lobby.”

“I did so,” they recalled. “And that was that.”

Dousing the flames

Policing a platform as large as YouTube is an unenviable task. The company claims about 500 hours of video are uploaded to the site every minute, arguably making its moderation one of the most complicated free speech issues to face any bureaucracy in human history. Doing all of this as an advertising company, which Google fundamentally is, only makes the problem thornier.

At the same time, the site’s enormous audience (and an estimated $US15 billion in annual revenue) means it has an equally great responsibility not to leave the world that has enriched it worse off. At the very least, YouTube appears to be pursuing a sincere effort to reduce how its platform is used to spread discrimination. But as our list of hateful channels demonstrates, it will take much more before the platform is truly hate-free.

Whether YouTube succeeds largely comes down to a single question, one that the frustrated Googlers have their own answer to: Can those who have built their careers on demonising the vulnerable be gently convinced to just knock it off?

