TikTok Battles With its Users to Stop the Spread of a Graphic Viral Video


TikTok is currently trying to remove videos of a graphic suicide from its platform, as some users have been hiding the graphic footage inside seemingly harmless videos.

A TikTok spokesperson told Gizmodo the company was automatically detecting and flagging these clips as they were uploaded.

“We are banning accounts that repeatedly try to upload clips, and we appreciate our community members who’ve reported content and warned others against watching, engaging, or sharing such videos on any platform out of respect for the person and their family,” they said.

“If anyone in our community is struggling with thoughts of suicide or concerned about someone who is, we encourage them to seek support, and we provide access to hotlines directly from our app and in our Safety Centre.”

How did the TikTok video end up on the platform?

On Monday, people began to raise the alarm about the graphic footage circulating online. The footage reportedly captured a Mississippi-based man’s suicide that had been broadcast on a Facebook Live stream.

Soon afterwards, people began to share the shocking video across the web, including on Facebook, Twitter, Snapchat and TikTok. In some cases, users altered the footage to evade the algorithmic filters that would otherwise block the upload.

Nefarious users began to splice the footage into otherwise innocuous-looking TikTok videos. Some people complained that their children were unwittingly stumbling across the footage, recommended by the company’s ‘For You’ algorithm.

Why is this TikTok suicide video a test for the platform?

Moderating a social media platform is difficult at any size. And that’s before you consider the platform’s core audience: teens.

But TikTok is different from other major platforms in at least one important way: it’s relatively new. Launched in 2017, TikTok is a baby in social-media-network years.

This means the company has the benefit of advances in technology, as well as lessons learned from other platforms’ mistakes.

And this is far from the first time users have attempted to circumvent a platform’s efforts to stop graphic content from spreading.

As pointed out by Twitter user Daniel Sinclair, this is not dissimilar to Facebook and YouTube users’ attempts to share the video of the 2019 Christchurch terrorist attack.

Such footage may even be subject to the Australian Government’s abhorrent violent material legislation, which makes it an offence to fail to remove “abhorrent” videos and images quickly. The eSafety Commissioner has been contacted for comment.

TikTok’s attempt to stop the spread of the footage is one of the first well-publicised tests of its moderation capacity. And it certainly won’t be the last.