TikTok is currently trying to remove videos of a graphic suicide from its platform, as some users have been hiding clips of the footage inside seemingly harmless videos.
A TikTok spokesperson told Gizmodo the company was automatically detecting and flagging these clips as they were being uploaded.
“We are banning accounts that repeatedly try to upload clips, and we appreciate our community members who’ve reported content and warned others against watching, engaging, or sharing such videos on any platform out of respect for the person and their family,” they said.
“If anyone in our community is struggling with thoughts of suicide or concerned about someone who is, we encourage them to seek support, and we provide access to hotlines directly from our app and in our Safety Centre.”
How did the TikTok video end up on the platform?
On Monday, people began to raise the alarm about the graphic footage circulating online. The footage reportedly captured a Mississippi-based man’s suicide that had been broadcast on a Facebook Live stream.
IMMINENT TRAUMA/PTSD WARNING:
My friend Ronnie McNutt committed suicide via hunting rifle on a FB livestream in front of myself and 200 others. The video is now making its rounds on @tiktok_us.
If you see a video of a bearded man at a desk on TikTok, CLICK AWAY AND REPORT IT.
— Alexis Tangman (@AlexisTangmanVA) September 7, 2020
Soon afterwards, people began to share the shocking video across the web, including Facebook, Twitter, Snapchat and TikTok. In some cases, users altered the footage to evade the algorithmic filters that would otherwise stop them from uploading it.
Nefarious users began to splice the footage into other innocuous-looking TikTok videos. People complained that their children were unwittingly stumbling across the footage, recommended by the company’s ‘For You’ algorithm.
The edited versions start with something benign or a rip of an influencer’s video to make ppl stop scrolling, then cut to the horrible video before they have a chance to look away. https://t.co/e24epk34jf
— Taylor Lorenz (@TaylorLorenz) September 7, 2020
Why is this TikTok suicide video a test for the platform?
Moderating a social media platform is difficult at any size. And that’s before you consider the platform’s major audience: horny teens.
But TikTok is different from other major platforms in at least one important way: it’s relatively new. Launched in 2017, TikTok is a baby in social-media-network-years.
This means the company has the benefit of advances in technology, as well as lessons learned from other platforms’ mistakes.
And this is far from the first time users have attempted to circumvent a platform’s efforts to stop graphic content from spreading.
As pointed out by Twitter user Daniel Sinclair, this is not dissimilar to Facebook and YouTube users’ attempts to share the video of the 2019 Christchurch terrorist attack.
It doesn’t have the same radicalization backdrop, but this is TikTok’s Christchurch. Both events faced troll reuploads that defeated algorithmic gating. https://t.co/iljrKoVCsu
— Daniel Sinclair (@_DanielSinclair) September 7, 2020
Such footage may even be subject to the Australian Government’s abhorrent violent material legislation, which makes it an offence to fail to remove “abhorrent” videos and images quickly. The eSafety Commissioner has been contacted for comment.
TikTok’s attempt to stop the spread of the footage is one of the first well-publicised tests of its moderation capacity. And it certainly won’t be the last.