Facebook is banning spreading false information about voting requirements and will also “fact-check fake reports of violence or long lines at polling stations” as part of its preparations for the 2018 midterm elections, Reuters reported on Monday.
Facebook’s general policy is not to delete fake or misleading information circulating on the platform per se—loath to invite partisan outrage and generally hesitant to ban content that stops short of outright incitement, it has instead used methods like removing pages it deems spammy or engaged in “coordinated inauthentic behavior.” It also uses fact-checkers (albeit ones who often allege they are overburdened and under-supported) and machine learning to demote fraudulent or misleading posts, as well as the pages that spread them, in news feeds.
However, Facebook has made a big show of how much it cares about elections after the fake news debacle in 2016, which really screwed all of us over. Reuters wrote that the decision to ban lying about voting restrictions outright is being enacted under pressure from Congress:
The ban on false information about voting methods, set to be announced later on Monday, comes six weeks after Senator Ron Wyden asked Chief Operating Officer Sheryl Sandberg how Facebook would counter posts aimed at suppressing votes, such as by telling certain users they could vote by text, a hoax that has been used to reduce turnout in the past.
The information on voting methods becomes one of the few areas in which falsehoods are prohibited on Facebook, a policy enforced by what the company calls “community standards” moderators, although application of its standards has been uneven. It will not stop the vast majority of untruthful posts about candidates or other election issues.
As Reuters noted, spreading lies about when and where to vote is already banned by Facebook. CNBC reported that the changes extend that ban to include “posts about exaggerated identification requirements,” though it does not appear that lies or misleading statements about conditions at the polling places themselves will face anything more onerous than the standard fact-checking procedure.
The change also does not extend to generalized propaganda and misinformation about the elections.
“We don’t believe we should remove things from Facebook that are shared by authentic people if they don’t violate those community standards, even if they are false,” Tessa Lyons, a Facebook product manager, told Reuters.
According to Bloomberg, Facebook’s additional measures include opening “direct lines of communication with the National Association of Secretaries of State and the National Association of State Election Directors,” as well as letting users “directly report instances of voter suppression when they see a post in their news feeds.”
Reuters wrote that Facebook’s cybersecurity policy chief, Nathaniel Gleicher, also disclosed that the company had considered banning all hacked materials—something with obvious ramifications for material leaked by whistleblowers or passed on to journalists—while other sources said it had briefly mulled banning all political ads. Neither of those steps was taken.
It’s debatable whether any of this will work. Facebook has long stated it takes the issue seriously, but the measures it does have in place have been nowhere near enough to stop the spread of junk, fake, and hoax content on the site. Nor is this problem unique to Facebook: other massive tech platforms, like Twitter and Google, are similarly struggling to rein in the beast they’ve unleashed, and it certainly doesn’t help that they’d prefer to limit their responsibility for what goes on via their platforms in the first place. In any case, Facebook wants everyone to know it’s at least trying to do something before they inevitably start yelling at it again.