Facebook wants you to know that it is committed to stopping the spread of internet hoaxes. But it requires some mental gymnastics to understand how signal-boosting comments with the word "fake" in them would help fight misinformation. In a recent test, however, that's exactly what the social network did.
[Embedded tweet from joanna barrett (@jobrigitte), October 23, 2017]
As the BBC reports, Facebook ran an experiment last month in which comments containing the word "fake" were pushed to the top of comment threads below links for some users. As a result, comment sections under stories from The New York Times, the BBC, The Guardian and other news outlets all opened with messages declaring the stories "fake".
"We're always working on ways to curb the spread of misinformation on our platform, and sometimes run tests to find new ways to do this. This was a small test which has now concluded," a Facebook spokesperson told the BBC. "We wanted to see if prioritising comments that indicate disbelief would help. We're going to keep working to find new ways to help our community make more informed decisions about what they read and share."
Back in March, Facebook debuted a feature intended to better highlight fake news stories on its site by marking them as "disputed" by third-party fact-checkers. While this doesn't prevent users from sharing a story, it gives them a non-partisan expert opinion on the truthfulness of the article. But simply promoting any comment with the word "fake" under stories that may actually be legitimate is a mystifying strategy for curbing nonsense on the platform.
Facebook, of course, has a storied history of running little "tests" on its users. In a study published in June 2014, the company manipulated the emotional content in the News Feeds of nearly 700,000 users to determine whether happy or negative content online can directly affect someone's mood. (It can.) The company also experimented with an "I Voted" button on the platform for years to see how it influenced voting behaviour. And in 2012, Facebook's Data Science Team randomly hid links hundreds of millions of times to "assess how often people end up promoting the same links because they have similar information sources and interests," according to Technology Review.
It's hard to know whether Facebook sincerely believed that elevating comments with the word "fake" in them would help users determine which stories were factually accurate, or whether this was just another social experiment to see how such comments influence its users. We reached out to Facebook for comment but had not heard back at the time of writing.