How Social Media Platforms Are Preparing for the U.S. Election, And the Aftermath

As stores run low on plywood barricades, guns fly off the shelves, and the U.S. president declines to commit to a peaceful transfer of power, social media companies have begrudgingly developed contingency plans for U.S. election day and the confusion that’s sure to follow. Underlying most of these policies are two likely scenarios: 1) a candidate will claim victory before the results are determined, and 2) there will be some level of violence, potentially spurred on by misinformation. Let’s take a look at what the platforms are doing to mitigate both.

Keep in mind that these companies often talk a good game, then fall well short of their own self-imposed moderation standards. As such, we continue to pray a solar flare will wipe out the internet today, but the odds of anything good happening this year just aren’t in our favour.

Facebook and Instagram

Facebook took a lot of well-deserved heat for its role in allowing disinformation to flourish during the 2016 election. Since then, it has invested in tamping down on this sort of behaviour, or at least in giving reporters the impression it’s doing so.

In the latter category, Facebook has launched its “war room” of experts, which it claims has worked against election disinformation in the past, including swiftly removing false claims that Brazil’s 2018 election date had been postponed and stifling calls for violence in the country. Facebook also says, in a recent blog post titled “Preparing For Election Day,” that it has 35,000 people on payroll “working on safety and security issues” like coordinated inauthentic behaviour, disinformation campaigns, and voter interference. That’s an impressive number to tout, though it appears to be just the number of content moderators Facebook already employs, and it doesn’t seem to have grown in at least a year.

Proactively, the company paused purchases of new political ads last week, for an indefinite period, to “reduce opportunities for confusion or abuse.” And last year, it launched an ad library database with information on advertiser spending and reasons for ad removals. Unfortunately, bugs and heavy use by researchers during exactly the situations it was designed for (like the U.S. election) have a tendency to make it less than reliable. Facebook also says it has temporarily stopped recommending political pages to users.

Facebook will, characteristically, label rather than delete a post by a party or candidate that prematurely claims victory; a notice will inform users that “votes are still being counted.” Facebook doesn’t say what it will do if a page or user other than a party or candidate declares a Donald Trump or Joe Biden victory, but the platform does note that it’s putting a big sign up on the site in case there’s any confusion:

If the candidate that is declared the winner by major media outlets is contested by another candidate or party, we will show the name of the declared winning candidate with notifications at the top of Facebook and Instagram, as well as label posts from presidential candidates, with the declared winner’s name and a link to the Voting Information Center.

As for violence, the same post from early last month states that:

[W]hen we become aware of them, we will also remove calls for people to engage in poll watching when those calls use militarised language or suggest that the goal is to intimidate, exert control, or display power over election officials or voters

Facebook seems to have no specific plan for violence away from polling sites and will rely on existing policies that prohibit “militarised social movements” and calls for violence. We’ll have to put our faith in Facebook’s ability to avoid another “operational mistake,” such as the 455 user reports of potential violence it ignored prior to the Kenosha shooting.

For the powder keg we’re expecting in the days to come: Facebook reportedly plans to roll out special security measures to slow viral content, tools usually reserved for “at-risk” countries like Myanmar. Our democracy is strong and healthy.

Twitter

Twitter is, of course, Trump’s preferred megaphone. But it has at least somewhat positioned itself as the anti-Facebook, no longer totally capitulating to conservatives and proactively featuring fact-checks.

Unlike Facebook, Twitter has blocked campaign ads altogether. And over the past year, the company has debuted a creative array of new label formats, largely on Trump’s timeline: removing some tweets, hiding others, and surrounding still others with factual context. It also doesn’t plan to remove Trump’s (or Biden’s) unsubstantiated claims of victory, but its labelling will be more conspicuous, probably pleasing no one and landing CEO Jack Dorsey in another kangaroo court session.

On Monday morning, in a tweet thread, Twitter laid out its election results plan, promising that should a candidate claim victory prematurely, it will append a label stating that official sources have not yet called the results; a warning will also appear before users attempt to retweet the claim. Twitter will consider a result official once it’s confirmed by a state election official or by at least two of the following outlets: ABC News, the Associated Press, CBS News, CNN, the Decision Desk, Fox News, or NBC News. Notably, the New York Times isn’t on the list.

The company also plans to add a warning or remove content “inciting interference with the election, encouraging violent action or other physical harms.”

YouTube

Unlike Facebook and Twitter, YouTube hasn’t announced a flurry of new policies. Chief product officer Neal Mohan told the New York Times that the company will follow standard procedure and that, if necessary, senior officials at YouTube will make “nuanced” decisions.

YouTube will rely on its Intelligence Desk, a team formed in 2018 that monitors for emerging, dangerous trends. Similar to the banners it has placed in search results with links to voting information, it will add banners for election results on Election Day. Under its existing community guidelines, it has been removing videos that mislead viewers about where and how to vote, and demonetising videos that include claims that could disenfranchise voters.

YouTube’s response to a candidate’s premature declaration of victory would depend on context, the company told Gizmodo. If the video included incitement to violence, YouTube would remove it for violating its policies; a candidate falsely claiming “I won” would simply come with an election results label. But YouTube says this is where its policy of reducing the spread of misinformation would kick in, in the form of prioritising trustworthy news sources.

TikTok

Aside from being flung into the dumbest political power play of all time, TikTok has mercifully avoided controversy around censoring politicians’ speech, possibly because its core user base isn’t even of voting age. Last year, it also banned paid political ads and sponsored content.

TikTok says it will partner with fact-checkers to “reduce discoverability” of content in which a candidate or user attempts to prematurely declare victory. It will also use the Associated Press as its authoritative source for results, and says that “out of an abundance of caution, if claims can’t be verified or fact-checking is inconclusive, we’ll limit distribution of the content.” Any content that attempts to suppress voting will come with an election guide banner. TikTok also says it has been working with the U.S. Department of Homeland Security to prevent foreign interference.

Reddit

Reddit has yet to get on board the trend of labelling misinformation, but will it remove objectively false information about the election? Unclear. When asked about its plans for the election, a Reddit spokesperson told Gizmodo that, broadly, “Reddit’s content policy is written to be flexible to address different forms of content manipulation or misinformation.” The spokesperson went on to name misinformation that undermines civic engagement and misrepresentation of election results, but Reddit’s exceptionally short list of rules doesn’t mention misinformation. “We have dedicated teams that enforce our site-wide policies and proactively go after bad actors on the site,” the spokesperson added, “and have built internal tools to detect and remove policy-breaking content.”

Reddit’s security team has elaborated a little, and their plan seems to be largely: Let’s let the users decide. “Downvote and report any potential election misinformation, especially disinformation about the manner, time, or place of voting,” a security report from last month reads.

Though its communities may not always enforce them consistently, Reddit does have existing policies against calls to violence, and does not appear to have developed new ones for this election.

