Amid Flood Of Mosque Shooting Videos, YouTube Sees It's Unprepared For A New Kind Of Virality 


The growing reality for social networks is that they will perpetually be exploited in new and terrorizing ways and there may not be a sweeping solution for bad actors. The hope is that solutions outpace abusers. This race was on disturbing display over the weekend as a horrific video of a gunman opening fire on worshippers in two mosques in New Zealand went viral, and the world’s leading video-sharing service scrambled to scrub the deluge of uploads.

ScoMo Wants To Suspend Live Streaming

Prime Minister Scott Morrison is calling for a suspension of live streaming on social media in the wake of Friday's terrorist attack in New Zealand. Roughly 17 minutes of the attack was live-streamed on Facebook, and despite attempts by authorities to suppress the video and the attacker's manifesto, both spread rapidly across various social media platforms and news sites.


The alleged shooter is now in custody, at least 50 people are dead, and many others are hospitalized with injuries. But another key facet of the massacre was the spread of a hate-filled 74-page manifesto and first-person footage of the shootings, captured on a body camera worn by the perpetrator. The disturbing footage immediately began to circulate, and YouTube removed every instance it could find. Traditionally, viral content spreads from its original source, and if it's caught early enough, cutting off the head of the snake can do a lot to slow it down. But in this case, trolls and ideological allies of the gunman immediately began pounding YouTube's servers with fresh uploads of the graphic footage.

“Every time a tragedy like this happens we learn something new, and in this case it was the unprecedented volume” of videos, Neal Mohan, YouTube’s chief product officer, told The Washington Post. Stating the obvious, Mohan said they “would have liked to get a handle on this earlier.”

Mohan also told The Washington Post that the Christchurch attack “was a tragedy that was almost designed for the purpose of going viral,” adding that YouTube has “made progress, but that doesn’t mean we don’t have a lot of work ahead of us, and this incident has shown that, especially in the case of more viral videos like this one, there’s more work to be done.”

The video uploaded over the weekend wasn’t unique in its footage of a mass shooting, but in its point of view: the gunman livestreamed the video from a body cam. And Mohan reportedly said that in the immediate aftermath of the shooting, copies were uploaded as fast as one per second. YouTube hasn’t released exact figures on how many uploads there were of the attack, but the numbers that have been made available on other social platforms are staggering—Facebook said that it took down 1.5 million videos of the attack within the first 24 hours after the shooting.

Mohan is correct that YouTube is the model platform for such gruesome virality. It is the most powerful video-sharing platform in the world, and it has a well-documented reputation for recommending videos centered on conspiracy theories and white supremacy. For Nazis, what better way to spread their ideologies and incite hateful violence than to tap into this easily exploited and Herculean platform?

YouTube does have systems in place to flag and remove hateful and violent content. It can use a hashing system to identify copies of a known video, automatically detecting and deleting any that are subsequently uploaded. Unfortunately, this system isn’t effective against many subtle manipulations of the video. As The Washington Post noted, users who uploaded videos of the Christchurch shooting made tweaks to the original, including watermarks, logos, size alterations, and animations.
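To see why small tweaks can defeat a naive matching system, consider exact hashing (a simplified sketch only—YouTube's actual pipeline is proprietary and far more sophisticated). A cryptographic hash of a file changes completely when even a single byte changes, so a watermark or re-encode produces a copy the blocklist no longer recognizes:

```python
import hashlib

# Simplified sketch: exact (cryptographic) hashing only flags byte-identical re-uploads.
original = b"...stand-in for the original video bytes..."
altered = original + b"\x01"  # a tiny change, e.g. an added watermark or logo

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(altered).hexdigest()

print(h1 == h2)  # False: the altered copy no longer matches the known-bad hash
```

This is why the tweaks described above—watermarks, logos, size changes, animations—are enough to slip past any system that relies on exact file matching.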

“Many violent extremist groups, like ISIS, use common footage and imagery,” YouTube wrote in a Twitter thread on Monday. “These can also be signals for our detection systems that help us to remove content at scale.” The company added: “However, every breaking news event is unique and there are no reference files provided in advance. And there’s also a constant flow of new footage, and countless variations of known footage uploaded in the hours immediately after an event. These factors present a significant challenge, but we are continually working to improve our detection systems.”

When we reached out to YouTube with some clarifying questions about how its filtering system works, a spokesperson simply redirected us to this Twitter thread that gives the broad outlines of its functionality.

YouTube here is partly referencing its Content ID system, which lets copyright owners preemptively submit reference files to the platform; any videos uploaded to the service are then cross-referenced against those files, and rights holders can track when a matching video appears. While Content ID applies to more innocuous material, like music and film, it raises the question of whether the same approach could work for videos like the Christchurch shooting. But the footage of the mass shooting flooding the social networks over the weekend was unlike any existing media in YouTube’s databases, so the system was unable to preemptively flag it. And, as previously mentioned, the footage was uploaded in countless variations that gamed and evaded the system in place. Combined with the staggering volume of uploads, this created a kind of virality that is unprecedented.
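Fingerprinting systems like Content ID tolerate small variations by comparing perceptual fingerprints rather than raw bytes. As a rough illustration of the general idea (a toy "average hash" on an invented 3×3 grayscale frame—not YouTube's actual algorithm), a lightly altered copy can still match its fingerprint even though its cryptographic hash would differ completely:

```python
# Toy perceptual "average hash": one bit per pixel, set if the pixel is
# brighter than the frame's average. Similar frames yield similar bit strings.

def average_hash(frame):
    pixels = [p for row in frame for p in row]
    avg = sum(pixels) / len(pixels)
    return [1 if p > avg else 0 for p in pixels]

def hamming(h1, h2):
    # Number of differing bits; small distance means "probably the same content".
    return sum(a != b for a, b in zip(h1, h2))

frame = [[10, 200, 30], [40, 250, 60], [70, 220, 90]]
# A lightly altered copy, e.g. a uniform brightness shift from an overlay.
tweaked = [[min(p + 5, 255) for p in row] for row in frame]

d = hamming(average_hash(frame), average_hash(tweaked))
print(d)  # prints 0: the altered copy still matches this fingerprint exactly
```

Even fuzzy matching like this has limits, though—heavier edits (cropping, animation, re-framing) push the distance past any reasonable threshold, which is part of why the variant uploads described above got through.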

“Like any piece of machine learning software, our matching technology continues to get better, but frankly, it’s a work in progress,” Mohan told The Washington Post.

Because the system in its current state is unable to detect certain variations, and because even thousands of human reviewers are not enough to urgently eyeball every flagged video, Mohan chose to let the machines make the final call. As we’ve seen with other failures of AI-driven moderation, benign videos are often accidentally removed. That was likely the case here, but ultimately the team chose expediency over precision. Creators whose content was unfairly removed can file an appeal.

The issue plaguing YouTube is that trolls and extremists have an incentive to focus their efforts on its platform—their content will almost certainly get the most views and, subsequently, the most shares. And YouTube is infamous for recommending hate-related content, meaning it is exacerbating the very problem it is trying to eradicate. The content is coming from inside the house. In the wake of a tragedy caught on film, if extremists push that content hard onto the one leading platform (while slightly manipulating some of the versions), it’s inevitable that some copies will slip through the system’s cracks and find their way into recommendations.

As YouTube mentioned in its Twitter thread on Monday, it is always working on ways to better detect violating content. And as we previously mentioned, the only saving grace would be for those building the platforms to stay at least one step ahead of those trying to abuse them. It’s an unsettling reality, and one that may have social networks reckoning with their role as catalysts for this disturbing new virality.

[The Washington Post]
