On Tuesday TikTok announced it will begin enforcing a ban on material related to QAnon. The company says such videos violate its disinformation policy.
TikTok Bans QAnon-Related Material
This is not the first step TikTok has taken to eradicate QAnon content from the platform. Back in July it blocked hashtags related to the conspiracy theory movement, including #QAnon, #OutofShadows and #QAnonTruth.
“Content and accounts that promote QAnon violate our disinformation policy and we remove them from our platform,” a TikTok spokesperson said in a statement.
“We’ve also taken significant steps to make this content harder to find across search and hashtags by redirecting associated terms to our Community Guidelines. We continually update our safeguards with misspellings and new phrases as we work to keep TikTok a safe and authentic place for our community.”
TikTok’s disinformation policy can be found on the Community Guidelines section of its website.
“Content that is intended to deceive or mislead any of our community members endangers our trust-based community,” the guidelines state. “We do not allow such content on our platform. This includes activities such as spamming, impersonation, and disinformation campaigns.”
Not the first to act
TikTok is comparatively late to the party when it comes to removing QAnon content. Twitter, Reddit and YouTube have all taken similar measures.
The most recent social media platform to place a blanket ban on QAnon was Facebook. In early October the company announced that Facebook Pages, Groups and Instagram accounts related to the conspiracy movement would be removed.
Previously, Facebook only removed QAnon content that actively promoted violence.
“Starting today, we will remove any Facebook Pages, Groups and Instagram accounts representing QAnon, even if they contain no violent content,” a Facebook blog post read.
“We are starting to enforce this updated policy today and are removing content accordingly, but this work will take time and need to continue in the coming days and weeks. Our Dangerous Organizations Operations team will continue to enforce this policy and proactively detect content for removal instead of relying on user reports.”