Why Do Facebook’s Algorithms Keep Abetting Racism?

Call it algorithmic ignorance. Or maybe algorithmic idiocy. On Friday, ProPublica revealed that Facebook’s ad-targeting system, which groups users together based on profile data, offered to sell ads aimed at a demographic of Facebook users who self-reported as “Jew Haters”.

“Jew Haters” started trending on Twitter when the piece went viral, and by the end of the day Facebook announced it had removed “Jew Haters” and other similarly offensive categories from its advertising service by temporarily disabling all self-reported education and employer targeting fields. “Facebook is removing all of these self-reported targeting fields until we have the right processes in place to help prevent this issue,” a spokesperson told Gizmodo. With the announcement, the company offered a predictably anodyne apology and explanation.

As Facebook explains, the categories were algorithmically generated from what users themselves put into the employer and education fields. Enough people had listed their occupation as racist bile like “Jew Hater”, their employer as “Jew Killing Weekly Magazine”, or their field of study as “Threesome Rape” that Facebook’s algorithm, toothless by design, compiled them into targetable categories.

Facebook’s response repeatedly emphasises that users themselves self-reported the data. But claiming ignorance of its own algorithms lets Facebook sidestep more obvious questions: What does it tell us about Facebook that Nazis can proudly self-identify on its platform? Why can’t Facebook’s algorithms determine that words such as “rape”, “bitch” or “kill” aren’t valid occupational terms? Facebook says its AI can detect hate speech from users, so why did it seemingly choose not to point that AI at its ad tools?
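To see how low the technical bar is, consider a minimal, hypothetical sketch of the kind of denylist check that could flag such terms before they become targetable categories. This is an illustration only, not Facebook’s actual system; the word list and function name are invented for the example:

```python
# Hypothetical sketch, not Facebook's actual system: a crude denylist
# check that could flag self-reported profile fields before the ad
# engine turns them into targetable categories.
DENYLIST = {"rape", "kill", "hater", "haters", "bitch"}  # assumed example terms

def is_valid_targeting_field(value: str) -> bool:
    """Return False if the self-reported field contains a denylisted word."""
    words = value.lower().split()
    return not any(word.strip(".,!?") in DENYLIST for word in words)

print(is_valid_targeting_field("Jew Hater"))          # False: rejected
print(is_valid_targeting_field("Software Engineer"))  # True: allowed
```

Facebook’s real pipeline is surely far more complicated than this, but even a filter this crude would have caught the worst of the categories ProPublica found.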

Despite a user base of two billion people, Facebook as a company has very few human faces. There’s COO Sheryl Sandberg, CEO Mark Zuckerberg and few others. So when a company of this size, one this reliant on automation, makes a mistake as huge as embedding antisemitism within its revenue scheme, there’s no one to blame. Even the apology is uncredited, with no human contact listed, save for the nameless press@fb.com boilerplate.

Zuckerberg and his cohorts made algorithmic decision-making the heart of Facebook’s ad-targeting revenue scheme, then enshrouded those systems in a black box. And as Facebook’s user base has grown, so have its blind spots.

Last year, lawyers filed a class action lawsuit against Facebook over concerns that its ad-targeting scheme violated the US Civil Rights Act. Beyond self-reported targeting data, Facebook also compiled data to place users into categories they may not even be aware of. In October, ProPublica revealed that, based on data such as friend groups, location, likes and so on, Facebook assigned users an “ethnic affinity”, a category analogous to race.

Advertisers could then either target or exclude users based on their affinity, a grave concern in a country that outlaws denying people housing and employment based on their race. Facebook ended its “ethnic affinity” targeting after the backlash. Unlike the “Jew Hater” debacle, where Facebook said it didn’t know what its algorithms were doing, here Facebook claimed it couldn’t foresee their disproportionate impact. Call that algorithmic idiocy.

Why do Facebook’s algorithms keep abetting racism? The specific answer is hidden inside Facebook’s black box, but the broader answer may be: it’s profitable. Each Facebook user is a potential source of revenue for the company. The more they use the site, the more ads they engage with, the more shareable content they produce, and the more user insight they generate for Facebook. When users reveal themselves as racist, antisemitic and so on, what obligation does Facebook have to remove them and frustrate its own revenue structure? Does removing or censoring users violate their First Amendment rights in the US?

In both the original ProPublica report and the follow-up from Slate, researchers have called for a public database of Facebook’s ad-targeting categories and a broader de-automation push across the company. At this point, Facebook can no longer deny the sore need for an ethical compass somewhere within its advertising business; the company’s algorithms and its racist and antisemitic controversies are linked. It’s time for a genuine paradigm shift towards accountability, out in the open, not another tepid half-step from Facebook within the comfort of its black box.

