The harassment of Julia and her team started in May.
That’s when Facebook expanded its fact-checking effort to Brazil. Fact-checkers at one of the participating organisations, where Julia (not her real name) serves as director, were targeted by groups who believed the organisation was censoring the right.
The harassment became so vitriolic that the small team shut down all of their personal social media pages. They were getting messages from trolls “saying that they would shoot us, we wouldn’t see Brazil’s next president,” Julia told Gizmodo. “Also people said they were going to follow us one by one.”
“Every day we get at least two to four tweets or Facebook messages saying that we are either censors, we don’t deserve to be online, we should die, or something like that,” Julia said. “It’s pretty bad.” She added, “Brazil is going crazy right now. You’re either against fact-checking, or you’re very quiet about it.”
The climate around the election is especially volatile in Brazil, where politically-motivated violence is not uncommon. Brazil’s far-right presidential candidate Jair Bolsonaro was recently charged “with inciting hatred and discrimination against blacks, indigenous communities, women and gays,” the New York Times reported. His son, Eduardo, was charged with threatening a journalist. Fact-checkers play a crucial role in holding these types of public figures accountable, discrediting dangerously misleading or entirely false claims made by politicians and their followers. In doing so, they position themselves as targets for those figures’ vicious and emboldened base, which may view the non-partisan act of checking a claim’s accuracy as a form of censorship or a biased attack on its group and its ideology.
It’s a rough tradeoff for what has come to be vital work in the age of misinformation. Julia’s organisation is just one of many around the world with content deals with Facebook. The tech giant launched a program with a few third-party fact-checkers at the end of 2016 as part of its strategy to fight fake news on its platform. The social network has touted a number of tactics in its war on bullshit, and it’s fact-checkers like Julia who are tasked with selecting and weeding out certain false claims.
Facebook’s program currently spans 17 countries. The participating fact-checkers are all certified by the International Fact-Checking Network, a non-partisan unit of the Poynter Institute launched in September 2015. Several fact-checkers in the program confirmed to Gizmodo that Facebook is paying them as part of the agreement. FactCheck.org, for instance, breaks down the funding it receives in a financial disclosure, which reveals that Facebook paid the organisation $US188,881 ($257,999) during the 2018 fiscal year, which ended June 30, 2018.
These are all third-party experts; Facebook doesn’t have an in-house team dedicated to these efforts. Its investment in the program has grown significantly since its launch two years ago, expanding globally. Still, the teams are far from expansive enough to come close to checking all flagged claims, a limitation Facebook itself acknowledged in a Hard Questions post in June, noting that there aren’t fact-checkers in every country and that, in regions where there are, they don’t have enough labour or time to fact-check every single flagged claim. A Facebook spokesperson told Gizmodo in an email that the fact-checking program has been able to “reduce future views of debunked stories by 80%, but it’s worth noting that we don’t believe it’s a silver bullet to fighting misinformation.” Instead, the spokesperson listed a number of approaches in Facebook’s strategy against fake news, including removing fake accounts and demoting misleading content in the News Feed. The company has also recently said it wants to use machine learning to prevent the spread of misinformation on the platform. Delegating the problem to machines is hardly a novel sentiment for the social media behemoth.
The fact-checkers were brought on after the public—and Facebook—discovered that the platform was being exploited by bad actors and used as a vehicle for foreign election interference during the 2016 U.S. presidential election. As the scope of that exploitation came into focus, Facebook announced it would lean on these organisations to prevent such widespread abuse of its service from happening again, especially around political discourse.
Facebook’s failure to prevent the spread of misinformation on its platform is a global problem. In an especially dangerous act of negligence, Facebook’s inability to deal with misinformation and hate speech in Myanmar has acted as a catalyst for violence towards the Muslim population in the region. A Reuters investigation published this month revealed that the issue, which has been on Facebook’s radar since at least 2013, is still very much mishandled. Reuters found more than 1,000 examples of “posts, comments, images and videos attacking the Rohingya or other Myanmar Muslims” that were on Facebook as of a week before the story was published in mid-August. “The anti-Rohingya and anti-Muslim invective analysed for this article – which was collected by Reuters and the Human Rights Center at UC Berkeley School of Law – includes material that’s been up on Facebook for as long as six years,” Reuters reported. On Monday, Facebook announced that it was removing a number of Myanmar accounts and pages from the service—including the nation’s top military official, Senior General Min Aung Hlaing—“to prevent them from using our service to further inflame ethnic and religious tensions.”
And a study published in May of this year from researchers at the University of Warwick found that communities in Germany with a higher-than-average Facebook use had more anti-refugee attacks, a relationship that “held true in virtually any sort of community—big city or small town; affluent or struggling; liberal haven or far-right stronghold—suggesting that the link applies universally,” the New York Times reported. The data linking Facebook to these attacks is even more unsettling—“Wherever per-person Facebook use rose to one standard deviation above the national average, attacks on refugees increased by about 50 per cent,” the Times reported.
Facebook’s partnership with fact-checkers grows more important as election seasons begin around the world, particularly in countries where fact-checkers test tools before they’re made available to their counterparts in the United States. Gizmodo spoke with seven fact-checkers based in various parts of the world to learn the current state of Facebook’s fact-checking efforts as multiple countries brace for upcoming elections.
Several fact-checkers voiced concerns over a lack of transparency from Facebook with regard to specific data and general information on the impact of their work—grievances fact-checkers expressed last year in a report from Politico. Because the fact-checkers have signed non-disclosure agreements with Facebook, however, they were willing to discuss certain parts of the program but declined to speak about other topics on the record.
The fact-checkers we spoke with detailed the current state of the dashboard, a tool Facebook developed for its third-party fact-checkers. While it varies slightly per region, fact-checkers described it as a page with a list of article hyperlinks that have been flagged by a combination of users and Facebook’s algorithm and ranked based on how much they are being shared. When fact-checkers decide which items on the dashboard to review, they provide Facebook with one of the eight available ratings, ranging from “False” to “Satire” to “Opinion” to “Not Eligible.” When a fact-checker rates a story as “False”, it will show up lower in the News Feed, and pages and domains that routinely share false news will have their distribution demoted and their monetisation and advertising privileges removed.
While the fact-checkers expressed gratitude for the dashboard, most of them didn’t seem to find it particularly helpful as a serious tool to fight misinformation. Instead, the dashboard offered them some insight into what type of stories were being flagged as fake. But it’s not a flawless system, especially now that “fake news” is often interpreted as any news that doesn’t reaffirm the reader’s firmly held beliefs. “People tend to flag content that they disagree with,” Angie Holan, PolitiFact editor, told Gizmodo.
Saranac Hale Spencer of U.S.-based FactCheck.org, another third-party fact-checking partner with Facebook, said the dashboard is a useful tool when it comes to identifying what users might flag as suspicious, but characterised it as “sort of unremarkable.” She said that the organisation’s focus is to hold public officials accountable and that the Facebook project is just a small part of what they do.
“The dashboard is not really a tool for that if you’re looking for viral misinformation,” Phil Chetwynd, editor in chief of Agence France-Presse, another organisation hired by Facebook for its fact-checking program, told Gizmodo. Chetwynd added that AFP has other tools and strategies to identify what content is worth fact-checking, including Facebook-owned tools like CrowdTangle, but that the dashboard in its current state “is not a tremendous help often” for that purpose. A Facebook spokesperson confirmed to Gizmodo that the dashboard tool in its current state doesn’t prioritise content by how viral it is.
AFP is one of the organisations that have the ability to look at flagged photos and videos within the dashboard. Julia’s organisation also reported having this capability, but fact-checkers using the dashboard in the Philippines, Germany, and the U.S. said they didn’t have this access yet and weren’t sure on a timeline for when they would get it. A Facebook spokesperson said that fact-checkers in Argentina, Brazil, France, India, Indonesia, Ireland, Mexico, and Turkey currently have the ability to fact-check photos and videos.
The absence of this tool doesn’t mean fact-checkers aren’t pinpointing photos and videos outside of Facebook’s system. Ellen Tordesillas, a journalist who helped found VERA Files, a fact-checking nonprofit in the Philippines, said they have been fact-checking photos and videos since 2016. The organisation only started its partnership with Facebook in April of this year, but they have been doing fact-checking under the National Endowment for Democracy since the presidential election of 2016. It’s a separate project from Facebook’s, but Tordesillas said they are “closely related.”
The ability to fact-check photos and videos is essential, especially around political campaigns, as conspiracy theories and hoaxes are increasingly spread through visual means, whether a meme, a manipulated image, a Facebook Watch video, or media taken out of context. There’s also the rising issue of deepfakes, ultrarealistic fake videos that offer a deeply unsettling new way to manufacture misinformation; even Marco Rubio has taken the issue on as a pet project. While none of the fact-checkers specifically mentioned deepfakes, they did cite manipulated photos as a source of misinformation. A Facebook spokesperson said the company is working on technical and human solutions to deepfakes, an effort involving its AI Research Lab.
The anatomy of a lie
Several fact-checkers detailed how hoaxes are spread visually in their region, typically around contentious political topics and political campaigns. Agencia Lupa, a fact-checking organisation working with Facebook in Brazil, said that it has already fact-checked two photos through Facebook’s dashboard. Both photos were real but put into false context. One was an arrest photo of journalist Miriam Leitão with accompanying text claiming she took part in an armed robbery of a bank during Brazil’s dictatorship, in October of 1968. The claim was completely false. At the time of the alleged robbery, Leitão was 15 years old and living in Caratinga, Brazil. The photo was taken four years later, during Brazil’s military regime, when she was 19 and was arrested and detained for months while pregnant, according to Agencia Lupa. She was reportedly tortured and threatened with rape during her detainment, according to the Brazilian newspaper O Globo, and after her release was prosecuted for participating in the Communist Party of Brazil. Leitão was never charged with or accused of taking part in an armed bank robbery.
The second photo was of Marco Antônio, son of Sérgio Cabral, the ex-governor of Rio de Janeiro. The text accompanying the image, which was real, stated that Antônio, who is running for Congress, would not use his last name on the ballot in order to distance himself from his father, who is now in jail. That claim is also false.
Jacques Pezet, a fact-checker for Germany-based CORRECTIV, another third-party fact-checker for Facebook, said he has been working on fact-checking a video outside Facebook’s dashboard—CORRECTIV doesn’t have that capability yet. The organisation identified the video because far-right pages were sharing it; it was also flagged by a Twitter user who tagged CORRECTIV’s fact-checking account.
The video was taken by a Czech tourist who recorded a film crew shooting a scene of people floating in the sea. The tourist falsely alleged that the crew was staging fake deaths of refugees near Crete, and the claim was then circulated as supposed proof that the media manipulates the public with fake images. However, Pezet said the organisation’s research indicated that the film crew was indeed shooting a scene, but for a dramatised historical documentary, Land of the Painful Mary, about Greek refugees relocating from Anatolia to Crete in the 1920s. Pezet contacted the film crew and director to confirm they were filming a documentary and to show that the subtitles and framing being shared misrepresented the footage.
Chetwynd also noted that AFP has fact-checked real photos taken out of context, specifically to manipulate the discussion around immigrants. For instance, he said they recently fact-checked a video claiming to depict a Saudi man attacking a hospital clerk in London; it was shared more than 40,000 times in under a month on Facebook. The incident happened, and the video is real, but it was taken out of context “with the implication of immigrants coming in, causing trouble,” Chetwynd said. It was actually footage of a Kuwaiti man spitting on and assaulting an Australian nurse during an argument over money at a veterinary clinic in Kuwait.
Another example involves a video of a drunk Russian man attacking a security guard and nurses in northwest Russia. It was shared on the now-defunct Facebook page “SOS anti-white racism” more than 100,000 times, according to First Draft, with a caption falsely presenting it as a foreigner attacking French hospital employees. Chetwynd said that this same video was used with a different context in different countries, including Turkey, Spain, and Italy.
“Whenever you see certain types of individuals, certain individuals, certain personalities become part of the discourse, or certain critics become prominent for that week or for a certain period, then you notice they figure more in these sites, these dubious sites we are monitoring,” Gemma Mendoza, who leads fact-checking efforts as well as research on disinformation on social media at Philippines-based Rappler, another fact-checking organisation involved with Facebook’s program, told Gizmodo. “You see those patterns. It seems there’s a content plan like they are also in tune with current events except the content is, in many cases, made up.” In other words, the fact-checkers have noticed that sites routinely posting misleading or fake content track the news cycle, publishing false information pegged to current events.
Aside from wider access to photo and video fact-checking, which is seemingly still in beta, fact-checkers want more information from Facebook on the impact of their work. “They promised some metrics to us,” Mendoza said. While they’ve seen hypothetical numbers, she said, they haven’t seen exact numbers specifically for the material they are fact-checking. She also noted that many false claims don’t come from just one URL; they’re circulated through many copycat sites, and she would like to know if the system is tracking the fact-checked claim itself rather than just the URL it’s attached to. “It’s like we’re running after sites all the time and then we don’t know if the claim is still circulating within the system,” she said. A Facebook spokesperson said that the company does have “machine-learning driven similarity detection processes in place to catch duplicate hoaxes.” In a blog post published in June touching on this technique, Facebook claimed that “a fact-checker in France debunked the claim that you can save a person having a stroke by using a needle to prick their finger and draw blood. This allowed us to identify over 20 domains and over 1,400 links spreading that same claim.”
It’s unclear whether the system is sophisticated enough to vet every similar false claim across domains—but its existence points to the breadth of what these fact-checkers are up against. “It’s like a whack-a-mole game,” Mendoza said, characterising the effort required to keep up with the fly-by-night websites perpetually popping up on Facebook.
“What everybody wants from Facebook is an improvement in the quality of information, in the quality of misinformation being flagged to us,” Chetwynd said, alluding to an improved flagging system for misinformation spread on the platform. “That is something they are still really struggling to provide for us.”
“We’re frenemies,” Mendoza said, referring to the organisation’s relationship with Facebook.
Transparency is not a lofty ask. The fact-checkers are effectively asking for evidence that the work they are doing is making a difference. And when that work is meticulous—work that for some brings a litany of hate messages and death threats—it’s a small thing to ask in return.
On a global scale, the fact-checking partnership is perhaps one of Facebook’s biggest self-professed solutions to the issue of misinformation on its platform. Rather than developing a dedicated in-house team to tackle the issue, Facebook has contracted out the problem. Holan thinks that was smart. “Facebook has created the platform and understands how the platform works, and we’re the fact-checkers and we fact-check the content,” she said. (Although it can be argued that Facebook really doesn’t get how its platform works.)
“I don’t think we’re going to reach some state of perfection with no misinformation online,” Holan said, citing human nature. But she did say that she believes tech platforms are beginning to understand that they control what type of information can proliferate. “I think the Alex Jones thing that happened ... [recently],” Holan said, “his content being removed from platforms is very interesting and a turning point of the platforms accepting the role that they have as gatekeepers.”