Facebook’s Leaked Docs: Here’s What You Need to Know

Facebook’s troubles aren’t slowing down — if anything, they’re mounting faster and faster.

Former Facebook employee Frances Haugen has leaked thousands of pages of internal documents about the company and filed whistleblower complaints with the Securities and Exchange Commission. Together, they provide a deeply unflattering and, in some cases, disturbing look at the gap between how executives portray the company publicly and what Facebook’s own internal research tells it about its products. Much of it, from studies showing Instagram’s psychological harm to some young girls to the existence of a program called XCheck that exempted certain high-profile users from the rules, has already been covered. But this weekend, a consortium of 17 news outlets given access to the documents released a wave of further damning articles going even deeper on the troubles at the company.

The articles paint a picture of a company roiled by internal conflict, with its own staff often in open opposition to executives like CEO Mark Zuckerberg. Many appear to show Facebook’s own researchers appalled at their findings on how the site actually works, as well as frustrated to the point of resignation by management’s inaction on, or outright interference with, their efforts to find solutions.

Facebook has issued denials on some of the accusations and portrayed others as misrepresentations of what the internal documents actually say. In a statement to Gizmodo via email, a Facebook spokesperson wrote, “At the heart of these stories is a premise which is false. Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or wellbeing misunderstands where our own commercial interests lie. The truth is we’ve invested $US13 (AU$17) billion and have over 40,000 people to do one job: keep people safe on Facebook.”

Regardless, here’s a roundup of some of what’s been reported in the Facebook news blitz of the last few days.

Senior employees shield right-wing publishers from consequences for breaking the rules

Facebook likes to insist that it doesn’t take sides in political debates, yet according to the Wall Street Journal, internal discussions show that senior employees often moved to shield right-wing publishers from being penalised or otherwise facing consequences for content that, at the very least, pushed the boundaries of the site’s rules. The Journal’s report shows that many staffers believe Facebook brass deliberately choose not to punish right-wing sites and pundits for violating its terms of service in order to avoid accusations of political bias.

In particular, according to the Journal’s report, employees believed that Facebook was coddling far-right internet hellhole Breitbart, which Facebook bizarrely decided to include in its prominently featured News Tab. (Facebook execs have publicly shot back that they also feature “far-left” news sites, which does not appear to be even remotely true unless you view mainstream media as a communist conspiracy.)

One Facebook employee pointed to Breitbart’s extremely hostile coverage of Black Lives Matter protests in 2020, saying they believed “factual progressive and conservative leaning news organisations” needed to be represented but there was no reason that courtesy should extend to Breitbart. A senior researcher shot back that the political blowback would make that a “very difficult policy discussion” and “news framing is not a standard by which we approach journalistic integrity.”

A Facebook spokesperson told Gizmodo, “We make changes to reduce problematic or low-quality content to improve people’s experiences on the platform, not because of a Page’s political point of view. When it comes to changes that will impact public Pages like publishers, of course we analyse the effect of the proposed change before we make it.”

The Journal also wrote that in 2019, Facebook killed a program called “Informed Engagement” that would have limited sharing stories without first reading them, due to fears it could disproportionately impact conservative media and lead to yet more yelling about bias. Another engineer compiled a list of rules violations by right-wing publishers, claiming that Facebook’s practice of assigning “managed partners” (i.e., company handlers) to these sites helped them escalate disputes to senior staff more concerned about avoiding the perception of anti-conservative bias.

The paper, which has generally been more sympathetic than other outlets to the Republican obsession with conspiracy theories that tech firms are systematically trying to censor them, didn’t find any evidence of similar internal debate about left-wing publishers.

Facebook created Stop the Steal, then failed to stop it

The New York Times reported on another issue that had the Facebook rank-and-file up in arms: extensive internal research on the spread of conspiracy content and disinformation about the 2020 elections across the site. For example, one researcher created a blank account and was quickly recommended QAnon content, while another researcher estimated that 10% of all views of political content on Facebook were of posts falsely claiming the election was a sham.

In the first example, a researcher created a “conservative mum” account in July 2019 and was bombarded with QAnon content; within three weeks, the researcher wrote, the account “became a constant flow of misleading, polarising and low-quality content.” In an exit note in August 2020, the researcher wrote that Facebook has “known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups. In the meantime, the fringe group/set of beliefs has grown to national prominence with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream.”

In the second example, on Nov. 9, 2020, a researcher alerted colleagues to skyrocketing amounts of conspiracy content promoting Trump’s claims that the election was fraudulent. Instead of stepping up its efforts, three employees told the paper, Facebook execs continued to relax measures like limiting the spread of fringe right-wing pages. On internal message boards, furious employees argued that Facebook had been warned about widespread incitement in the lead-up to the Jan. 6 riots at the Capitol but had failed to shut down the innumerable “Stop the Steal” groups, which remained active on the site until then. Subsequent internal reports highlighted a continual pattern at Facebook: harmful content wasn’t challenged until after it had gained widespread traction, the company was too soft on claims of electoral fraud phrased to sound like reasonable concerns, and it failed to prevent the spam-invite tactics that inflated the Stop the Steal movement’s size on the site.

In each of the cases studied by the Times, Facebook executives either ignored the problem or failed to do anything effective about it. Employees were torn as to whether Facebook was unable to control the problem or simply turned a blind eye to avoid offending the MAGA crowd.

Rampant use of multiple accounts

Facebook has been well aware for years that a relatively small number of people are able to spread vitriolic disinformation and violent content by using multiple accounts to spam, and it’s done very little about the problem, according to documents reviewed by Politico.

According to Politico, Facebook’s internal label for this kind of user is “Single User Multiple Accounts” (SUMA), and the documents show that the site has yet to mount any kind of coherent response, despite research from March 2018 showing SUMAs reached about 11 million viewers daily (about 14% of its U.S. political audience). These users often used their real name on each account, meaning they weren’t violating Facebook’s “fake account” rules. While many SUMAs are harmless, the Facebook equivalent of Finstas, others evade rules against spamming by switching accounts to continue the flood of content.

Former Facebook director of public policy Katie Harbath told Politico that while the company could crack down on SUMAs that post extreme political rhetoric, “there was a strong push from other parts of the company that actions needed to be justified and clearly explained as a violation of rules” and that executives lacked the “stomach for blunt actions” that could result in complaints.

An internal post from 2021 estimated that up to 40% of new signups were SUMAs, Politico wrote. The post stated that Facebook’s systems both undercounted SUMAs and underestimated their influence on the site.

“It’s not a revelation that we study duplicate accounts, and this snapshot of information doesn’t tell the full story,” a Facebook spokesperson wrote to Gizmodo. “Nothing in this story changes the estimate of duplicate accounts we disclose in our public filings, which includes new users, or that we provide context on in our ad products, ad interfaces, in our help centres, and in other places. Ultimately, advertisers use Facebook because they see results — we help them meet their business objectives and provide appropriate metrics in our reporting tools.”

Facebook staff warned the site was fuelling ethnic conflict abroad

Facebook’s absentee landlord problem abroad, in which it rolls into a foreign market, fails to understand local conditions or hire adequate levels of staff, and then flails or looks the other way when the result is a flood of hate speech, has been well documented, notably in its role in the Myanmar genocide. Reports by the Washington Post, the New York Times, CNN, and other outlets further detail Facebook’s negative impact in countries including India and Ethiopia, where for many users Facebook is the de facto internet.

In India, the Times wrote, dozens of reports and memos detail the company’s failure to stem hate speech and celebrations of violence. One involved a test account created in February 2019 with its location set to Kerala, India, which followed every content and group recommendation generated by Facebook’s algorithms. The feed quickly devolved into vitriol, including anti-Pakistan posts rife with violent imagery. The researcher wrote, “I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total.” A March 2021 report showed that the problem persisted and that Facebook was “replete with inflammatory and misleading anti-Muslim content,” according to the Times. The problem was particularly acute with accounts linked to Rashtriya Swayamsevak Sangh, a Hindu nationalist group tied to India’s ruling right-wing BJP.

According to the Post, a 2020 internal summary showed that 84% of Facebook’s budget for fighting misinformation targeted the U.S., while 16% went to the “Rest of World.” One document showed that Facebook had not yet created algorithms capable of detecting hate speech in Hindi or Bengali, two of the most widely spoken languages in the world, while another reiterated problems with spammers using multiple accounts to spread Islamophobic messages.

According to CNN, a Facebook team released a report in March 2021 calling attention to “Coordinated Social Harm” in Ethiopia, warning that armed groups were advocating harm against minorities in the “context of civil war.” The report contains sections focusing on fighting between Ethiopian government forces and the Tigray People’s Liberation Front (TPLF), and in particular on a militia group called the Fano, which often allies with the government and maintained a network of Facebook accounts for fundraising, propaganda, and ethnic incitement. The Facebook team recommended the network’s deletion but warned, “Current mitigation strategies are not enough.” Researcher Berhan Taye told CNN that content moderation in Ethiopia relies heavily on volunteers from human rights groups, to whom Facebook delegates the “dirty work.”

“Over the past two years we have actively focused and invested in Ethiopia, adding more staff with local expertise, operational resources and additional review capacity to expand the number of local languages we support to include Amharic, Oromo, Somali and Tigrinya,” a Facebook spokesperson wrote to Gizmodo. “… We’ve invested significantly in technology to find hate speech in various languages, including Hindi and Bengali.”

“… We have dedicated teams working to stop abuse on our platform in countries where there is heightened risk of conflict and violence,” the spokesperson added. “We also have global teams with native speakers reviewing content in over 70 languages along with experts in humanitarian and human rights issues. They’ve made progress tackling difficult challenges — such as evolving hate speech terms — and built new ways for us to respond quickly to issues when they arise. We know these challenges are real and we are proud of the work we’ve done to date.”

Facebook knows it’s not doing nearly enough to fight human trafficking

Internal documents show that Facebook has been well aware of the extent of human trafficking and the “domestic servant” trade across the site since at least 2018, CNN reported. While the company scrambled to address the problem after Apple threatened to remove its products from its iOS App Store in 2019, the network reported it remains trivially easy to find accounts advertising humans for sale. Using search terms found in the Facebook report, CNN wrote it “located active Instagram accounts purporting to offer domestic workers for sale, similar to accounts that Facebook researchers had flagged and removed” previously. One of them listed women available for purchase by “their age, height, weight, length of available contract and other personal information,” CNN wrote.

One November 2019 document, detailing Facebook’s response after the Apple threat, stated that the company “formed […] a large working group operating around the clock to develop and implement our response strategy.” The report also states, “Was this issue known to Facebook before the BBC enquiry and Apple escalation? Yes.”

Facebook did not expand its “Human Exploitation” policies to ban domestic servitude content related to recruitment, facilitation, and exploitation until May 2019, according to CNN. In September 2019, one internal report detailed a trans-national human trafficking group that used hundreds of fake accounts across Facebook’s apps and services (including Instagram) to facilitate the sale of at least 20 potential victims, and which had spent $US152,000 (AU$202,829) on ads. Facebook took action to remove the network.

According to CNN, the problem persists. One January 2020 document stated that “our platform enables all three stages of the human exploitation lifecycle (recruitment, facilitation, exploitation) via complex real-world networks.” A February 2021 document focused on the Philippines warned that Facebook lacks “robust proactive detection methods … of Domestic Servitude in English and Tagalog to prevent recruitment,” and that detection capabilities weren’t turned on for stories. The Associated Press confirmed that similar searches for the word “khadima,” meaning “maid” in Arabic, bring up numerous posts offering African and South Asian women for sale.

“We’ve been combatting human trafficking on our platform for many years and our goal remains to prevent anyone who seeks to exploit others from having a home on our platform,” a Facebook spokesperson wrote to Gizmodo.

Young people are abandoning Facebook in droves

Other news outlets published stories focusing on internal Facebook documents showing that the site’s popularity is crashing with young people. According to the Verge, earlier this year a researcher showed colleagues statistics indicating that U.S. teen users dropped by 13% in 2019 and were likely to drop by 45% over the next two years, while U.S. users between the ages of 20 and 30 had dropped 4%. The researcher predicted that if “increasingly fewer teens are choosing Facebook as they grow older,” then the network’s ageing-up problem could be far more “severe” than it realised.

The Verge wrote that Facebook researchers showed Chris Cox, Facebook’s chief product officer, an alarming presentation of “health scorecards” earlier this year:

“Most young adults perceive Facebook as a place for people in their 40s and 50s,” according to the presentation. “Young adults perceive content as boring, misleading, and negative. They often have to get past irrelevant content to get to what matters.” It added that they “have a wide range of negative associations with Facebook including privacy concerns, impact to their wellbeing, along with low awareness of relevant services.”

The data showed that account registrations for users under 18 were down 26% from the previous year in the app’s top five countries, the Verge wrote, and that engagement was flatlining or dropping among young people. People older than 30 were also spending significantly more time on the site per day on average (an additional 24 minutes). While Instagram was faring better, the researchers wrote they were likely losing “total share of time” to competitor TikTok.

According to Bloomberg, Facebook executives have been very quiet on the issue, which poses an existential threat to the future value of a company currently worth nearly a trillion dollars. One of the complaints Haugen filed with the SEC claims that Facebook “misrepresented core metrics to investors and advertisers” for years by excluding stats showing slowdowns in demographics like young people, and that it exaggerated overall user growth by failing to distinguish SUMAs in growth reports.

Employees are outraged that Facebook execs don’t act on their findings, or worse, try to shut them down

Many of the reports focused on what appears to be widespread outrage among Facebook staff that the company is choosing profits over addressing these issues.

According to Politico, in December 2020, one employee complained in an internal post about turnover on safety teams: “It’s not normal for a large number of people in the ‘make the site safe’ team to leave saying, ‘hey, we’re actively making the world worse FYI.’ Every time this gets raised it gets shrugged off with ‘hey people change jobs all the time’ but this is NOT normal.” The same month, another wrote, “In multiple cases, the final judgment about whether a prominent post violates a certain written policy are made by senior executives, sometimes Mark Zuckerberg. If our decisions are intended to be an application of a written policy then it’s unclear why executives would be consulted.”

“Facebook’s content policy decisions are routinely influenced by political considerations,” another employee wrote in a post announcing their departure that month, according to Politico. “In particular we avoid antagonizing powerful political players. There are many cases of this happening.”

Politico separately reported that many employees were fed up with intervention from Facebook’s lobbying and government relations team, headed by former Republican political operative Joel Kaplan, which they said routinely overruled other staff on policy decisions. In a December 2020 report, one data scientist wrote that “The standard protocol for enforcement and policy involves consulting Public Policy on any significant changes, and their input regularly protects powerful constituencies… Public policy typically are interested in the impact on politicians and political media, and they commonly veto launches which have significant negative impacts on politically sensitive actors.”

Other documents detail that Kaplan’s team oversaw XCheck, a program that exempted certain accounts from rules applying to everyone else, and regularly moved to protect right-wing celebrities from penalties to their accounts. Kaplan’s team also oversees all content rules, whereas Politico noted that policy and safety teams are independent divisions at competitors like Twitter and Google. According to Bloomberg, other documents show that Facebook’s integrity team was routinely dispirited by discoveries such as finding that downranking some harmful content by 90% failed to stop its promotion; members of that team were frustrated by Facebook brass, such as the policy team, constantly intervening to shut down or limit their initiatives.

The data scientist’s report noted that executives like Zuckerberg often make key moderation decisions and that this only makes sense if there is an “unwritten aspect to our policies, namely to protect sensitive constituencies.” Another article by the Washington Post detailed that employees are increasingly frustrated by Zuckerberg’s micromanagement, including his laser focus on growth metrics and a hardline approach to free speech that brought him into conflict with the integrity division.

“Our very existence is fundamentally opposed to the goals of the company, the goals of Mark Zuckerberg,” one integrity staffer who quit told the Post. “And it made it so we had to justify our existence when other teams didn’t.”

