Facebook claims to be doing a lot to fight hate speech. But Facebook also collected nearly $2 million in ad money from organisations designated as hate groups by the Southern Poverty Law Center between May 2018 and September 17, 2019, according to a Wednesday report by Sludge.
The SPLC is considered one of America’s most prominent civil rights watchdogs. It classified the 38 organisations in question as hate groups because they have “beliefs or practices that attack or malign an entire class of people, typically for their immutable characteristics.” (Many of the groups in question have vigorously contested those designations and insist they are being targeted simply for espousing conservative viewpoints, which is perhaps not the most persuasive argument in these times.)
At the top of the list is the Federation for American Immigration Reform (FAIR), which Facebook’s ad database shows ran 335 ads at a total bill of $1,347,857. (FAIR was founded by virulent nativist and white supremacist John Tanton and regularly gripes about topics like the changing “ethnic base” of the U.S., but has managed to maintain some degree of mainstream credibility with right-wing news outlets.)
Second was the Alliance Defending Freedom, an anti-LGBTQ Christian group that has pushed for the criminalisation of “sodomy” in the U.S. and abroad, at $580,061.
Other groups on Sludge’s list of Facebook ad buyers included the homophobic Family Research Council ($158,334), the anti-Muslim Clarion Project ($81,473), and the ominously-titled Californians for Population Stabilization ($299,475), an anti-immigrant group founded by eugenicist and far-right race “scientist” Garrett Hardin. CAPS once hired a neo-Nazi as its public affairs director.
Specific ads noted by Sludge included an ad by The American Vision, a group the SPLC writes has advocated the execution of gay people, which linked to a now-removed blog post calling gay people “evil.” William Gheen, the nativist head of Americans for Legal Immigration PAC, purchased ads parroting anti-immigration “invasion” rhetoric of the type cited by a mass shooter in El Paso, Texas this year and asking users to share a post stating “100% OF ILLEGAL ALIENS ARE CRIMINALS.”
Three local chapters of the Proud Boys, a far-right street brawling group that earned the attention of the FBI last year, also did comparatively small Facebook ad buys (some of which were eventually removed).
Sludge wrote that in total, Facebook ran some 4,921 ads from the 38 hate groups. Facebook has claimed that it is making progress and proactively identified 65 per cent of the hate speech it removed in Q1 2019, up from 24 per cent in Q4 2017. But the groups have been allowed to remain, Sludge argued, because the platform’s moderation efforts are “mainly focused on individual posts, not on the accounts that do the posting” and it only bans groups “that proclaim a violent mission or are engaged in violence”:
Facebook may take down a hate group’s post that explicitly attacks people based on a “protected characteristic,” but it wouldn’t ordinarily ban that group from its platform if the group didn’t have a mission Facebook considers violent. For example, it removed three pages of the Proud Boys, who advocate violence, but has let hate groups that are extremely discriminatory yet not explicitly violent remain. The contrasting definitions of hate speech and hate groups allow the company to take down some offensive posts but permit numerous hate groups to have a presence, posting, spending money, and recruiting on its platform.
In June, Facebook released a nearly 30-page audit prepared by civil rights leader Laura Murphy in consultation with roughly 90 prominent civil rights groups. Multiple civil rights groups told Gizmodo that while the audit showed Facebook had made some progress, policy changes such as its decision to ban support of white supremacy or “nationalism” didn’t go far enough, and the company had not laid out a proactive plan to fight the spread of hate speech.
Facebook’s much-touted machine learning algorithms for policing hate speech have also been regularly lambasted as inadequate.
For example, Auburn University senior fellow and GDELT co-creator Kalev Leetaru told Gizmodo that he thought Facebook could improve its automated moderation with existing technology, but “the reason platforms are reluctant to deploy it comes down to several factors”—including the cost of running more “computationally expensive” systems and the money generated from extreme content.
“Terrorism, hate speech, human trafficking, sexual assault and other horrific imagery actually benefits the sites monetarily,” Leetaru added. “… Other than a few high-publicity cases of advertiser backlash against particularly high profile cases, advertisers aren’t forcing the companies to do better, and governments aren’t putting any pressure on them, so they have little incentive to do better.”
Facebook has also admitted it failed to act appropriately against military officials in Myanmar inciting genocide against the minority Rohingya population. A United Nations investigator later harshly criticised Facebook’s subsequent efforts to do better and its efforts since have failed to inspire confidence.
According to Sludge, searches for similar content on competitors Google/YouTube, Twitter, and Snap showed that Twitter took $1,358,074 from FAIR since October 2018, while Google/YouTube took $133,290 from the group since the end of May 2018. “Few, if any” other hate groups appeared in the Google/YouTube political ad archive, while no other SPLC-designated groups appeared in Twitter or Snap’s databases, Sludge wrote. (However, as Sludge noted, Facebook’s ad archive is more comprehensive and accessible than the others’ databases.)
Keegan Hankes, the interim research director of the SPLC’s Intelligence Project, told Sludge, “This is an astounding amount of money that’s been allowed to be spent by hate groups… It is a decades-long tactic of these organisations to dress up their rhetoric using euphemisms and using softer language to appeal to a wider audience. They’re not just going to come out with their most extreme ideological viewpoints.”
The organisations in question soft-pedal their Facebook content “knowing full well that people who are amenable to that message might very well go to their website or go to whatever propaganda they’re operating and get exposed to more extreme rhetoric,” Hankes added. He told Sludge that he believed Facebook only takes action when it is “politically expedient,” whereas anti-immigration, anti-Islam, and anti-LGBTQ viewpoints “have a lot of traction in mainstream conservatism right now.”
“We continually study trends in organised hate and hate speech, and work with partners to better understand how they evolve,” a Facebook spokesperson told Sludge. “We are reviewing the content flagged and taking action against any posts or ads that violate our policies.”