The Australian arm of social video platform TikTok has declined to specify exactly how many of its users saw a graphic suicide video that went viral on its platform earlier this year.
In September, a graphic video showing a Mississippi-based man’s suicide began circulating on the platform.
Users began to upload it and, once the company began to implement filters that stopped the original version of the video being uploaded, remixed and altered it to avoid detection by the company’s algorithms.
Some even spliced the footage into other TikTok videos to trick unsuspecting users into watching it.
In response, parents and schools were encouraged to keep their children off TikTok. Prime Minister Scott Morrison even chimed in, condemning the platform and calling for better safeguards.
Just a few weeks later, the company’s representatives appeared in front of the Senate Select Committee into Foreign Interference through Social Media.
In between questions about their links to China and their moderation policies, Australian Greens Senator Sarah Hanson-Young asked about the company’s response to the video.
TikTok Australia’s general manager Lee Hunter told the committee that more than 10,000 different versions of the video were uploaded to the platform in an attempt to avoid the technologies that were detecting and deleting the footage.
But one question Hunter didn't answer at the time, put to him by Senator Hanson-Young, was how many Australian accounts saw versions of the video and how much money, if any, was made directly from them. He told the committee he would "take it on notice", meaning he'd get back to them later.
But now we have an answer, technically. According to the company, "a very, very small percentage of Australian users saw this content". Further, the company said that the vast majority of those who did see it actively sought it out.
But ultimately the company didn't give a specific number, nor did it say how much money, if any, it made.
Now, there's no obligation for the company to say anything. But as social media companies brag about how many videos they delete or fact-check, it's difficult to judge just how good a job they're doing at moderation without more information. And that's before even considering how far some of the videos spread, or how many non-offending videos get wrongly flagged as false positives.
If TikTok won't — or perhaps can't — say how many people saw a video, it's near impossible to know how well it's protecting its users.