Mark Zuckerberg Is Deluded If He Thinks AI Can Solve Facebook’s Hate Speech Problem

Artificial intelligence is all the rage these days, so naturally we want to throw it at all of our problems. Yesterday, Mark Zuckerberg’s comment that Facebook won’t have effective tools to filter hate speech for another “five to 10 years” seemed both dismissive and uninspiring. No doubt, it was classic Silicon Valley speak for “we haven’t got a clue,” but Zuck wasn’t being completely unreasonable.

While it’s true that AI can already filter hateful content on social media, the real challenge is getting it to recognise the many nuances involved – something that’s a distinctly human problem.

During yesterday’s joint hearing of the Senate Judiciary and Commerce committees, Republican Senator John Thune asked Zuckerberg how Facebook currently flags hate speech on the platform, and about the various challenges involved. Zuckerberg said Facebook didn’t have the capacity to weed out this sort of content in the past, but that recent advances in AI are finally making it possible. The CEO pointed to filters capable of identifying pro-ISIS and Al Qaeda content, along with automated tools that can tell when users are at risk of self-harm. But as for more sophisticated hate speech filters, he said, we’re going to have to wait.

“I’m optimistic that over a five-to-10-year period, we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging content for our systems, but today we’re just not there on that,” Zuckerberg told the committee. “Until we get it more automated, there’s a higher error rate than I’m happy with.”

Zuckerberg was cornered into producing a delivery date, but his response suggests it’s a solvable problem, even if it’s something we shouldn’t expect for a while. His comments aside, it remains an open question whether Facebook, or any other company for that matter, is capable of developing a system that can consistently and effectively identify hate speech, and in such a way that everyone, or at least the vast majority of us, can agree with the results. The problem here is not so much the technology involved as human nature itself.

Indeed, Facebook could actually build this tool today using current machine learning technology. All it would have to do is gather a sufficiently large dataset containing both acceptable and unacceptable forms of content, tag the inappropriate stuff, and train the AI to recognise the difference. Eventually, the system would be able to identify hate speech in content it has never encountered before. Piece of cake, right?

Not really.
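To see why not, it helps to see just how simple the mechanical part really is. Here’s a minimal sketch of the kind of supervised pipeline described above, using scikit-learn – the example posts, labels, and model choice are all invented for illustration, not Facebook’s actual system:

```python
# A toy sketch of the supervised pipeline described above, using scikit-learn.
# The four example posts and their labels are invented for illustration; a
# production system would train on millions of human-labelled examples and a
# far richer model than TF-IDF plus logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "hope you all have a great day",                   # acceptable
    "those people are subhuman and should disappear",  # hate speech
    "lovely weather for the market this weekend",      # acceptable
    "we need to drive that group out of our country",  # hate speech
]
labels = [0, 1, 0, 1]  # 1 = tagged as hate speech by human annotators

# Vectorise the text and train a linear classifier on the tagged examples.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# The trained model can now score content it has never encountered before.
unseen = ["that group should disappear from our country"]
print(model.predict_proba(unseen)[0][1])  # estimated probability of hate speech
```

Everything hard about this problem lives outside that code: in the dataset, in the labels, and in who gets to decide them.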

First of all, who gets to tag the content? And who gets to decide what constitutes hate speech? Facebook? The government? Academics? To be clear, hate speech is a socially constructed concept that changes over time – especially as those who intend to persecute or diminish certain groups find more creative ways of doing it. Given that, it’s far from obvious that we (whoever “we” are) can ever reach consensus on what constitutes hate speech and what doesn’t. Facebook now faces the daunting challenge of devising a system that can do what human communities have consistently failed to do. Perhaps it will be easy, and not too contentious, to flag overtly hateful content, such as blatant racism or homophobia. But what about more opaque expressions? When does criticism of vegans become hate speech? Is attacking supporters of socialised healthcare (“fucking commies”) hate speech? What about aggressively confronting people who support the NRA?
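The tagging question isn’t hypothetical: human annotators routinely disagree on borderline content, and that disagreement is measurable. Here’s a toy illustration (the ratings are invented) using Cohen’s kappa, a standard chance-corrected agreement statistic:

```python
# Invented judgements from two hypothetical annotators rating the same ten
# posts: 1 = hate speech, 0 = acceptable. Cohen's kappa corrects their raw
# agreement rate for agreement expected by chance alone.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

# Raw agreement here is 70%, but kappa comes out to only 0.4 -- "moderate"
# agreement. Scores like this are common on borderline content, which is
# exactly the consensus problem described above.
print(cohen_kappa_score(rater_a, rater_b))
```

Whatever the AI learns, it learns from labels like these; if the humans can’t agree, the model inherits that confusion.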

It’s going to be increasingly difficult for Facebook to make all of its users feel happy or safe. Some users will always take issue with the hate filters for not being strong enough, or focused on certain subsets of potentially abusive users over others. The presence of hate speech is a social problem, not just a technological one. To think we can just throw AI at this problem and all will be resolved is intensely naive.

It’s also important to recognise that hate speech can be disguised. People will always find ways to get around the system, whether it be human- or machine-based. As a current example, crypto-fascists use rhetoric, metaphor, and tricks of language to make their content seem less… fascist. It has even been alleged that the Smurfs are an example of crypto-fascism. This may be an exaggeration, but how in the hell is an AI expected to pick up on this sort of subtlety when even humans can’t agree?

Conversely, there’s the problem of flagging hate speech that’s anything but. For some persecuted or marginalised groups and individuals, using “hateful” labels is a way of taking ownership of derogatory terms that are often used against them. If an AI is to ever succeed in this realm, it will have to identify these subtleties, and also keep up with changing social norms and cultural mores – which may be difficult given the rapid pace of social change, but is not impossible.

Sara Wachter-Boettcher, author of Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech, agrees that Facebook faces a challenge in trying to automate its hate speech filters. “If you look at the criteria Facebook has used in the past for its human-driven content moderation, you can see that the internal logic of who and what is protected speech is not simple, and is dependent on confusing, culturally dependent, and ever-shifting norms and definitions.”

Even if we had unlimited moderators working today, she said, or even a perfectly consistent and accurate AI (if such a thing could ever exist), we’d still have to sort out fundamental disagreements about why certain types of speech are permissible and others aren’t, where lines are drawn, and whose interpretation of the speech counts.

“If I am being threatened, for example, and it feels credible to me, but you look at it and say that it’s not, who’s right? Whose interpretation of the threat should the AI learn from? What does a ‘neutral’ interpretation of credible threat even mean?” asked Wachter-Boettcher. Whether this technology is close to being ready is, in her estimation, practically irrelevant, as tech companies “are nowhere close to ready to answer these kinds of questions.”

“Even if we assume Facebook is willing and ready to put massive resources toward rectifying this underinvestment, it’d be a massive cultural shift at the company that would take years,” she told Gizmodo. “So until and unless I see that kind of thing coming from Facebook, in my mind, the technical piece is largely irrelevant, because it won’t solve the problem.”

Joanna J. Bryson, an Associate Professor in the Department of Computer Science at the University of Bath, says AI can already detect a lot of hate speech, but she believes it’s crazy – and even a bit irresponsible – to say this technology is just five years off.

“There’s a basic principle of the social sciences that as soon as you discover a measure it becomes no longer a valid measure, so you will never get a perfect filter,” Bryson told Gizmodo. “But you could certainly already have AI augmentation for human editors [i.e. humans working alongside AI], particularly if you had as much spare cash on hand as Facebook to pay the humans. The big problem is getting the sociologists to agree on a definition of hate speech, not then being able to detect that with AI; the latter would be relatively easy – again, assuming you don’t expect perfection, which, as you say, you can never have.”
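Bryson’s “AI augmentation for human editors” is essentially a triage scheme: let the model act on its own only at the confident extremes, and queue everything ambiguous for a person. A minimal sketch, with hypothetical thresholds and a made-up classifier score:

```python
# A minimal sketch of human-in-the-loop triage, in the spirit of the
# "AI augmentation for human editors" Bryson describes. The thresholds and
# the score are hypothetical; a real system would tune them carefully.
def triage(post: str, hate_score: float) -> str:
    """Route a post based on a classifier's estimated hate-speech probability."""
    if hate_score >= 0.95:   # egregious, near-certain cases: act automatically
        return "auto-remove"
    if hate_score <= 0.05:   # clearly benign content: leave it alone
        return "auto-allow"
    return "human-review"    # everything ambiguous goes to a paid human editor

print(triage("some borderline post", 0.62))  # -> "human-review"
```

The design choice matters: the machine only ever handles the cases nobody would argue about, which is precisely the distinction Bryson draws next.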

That said, Bryson argues we shouldn’t take the fact that policing the controversial cases is impossible – and probably undesirable – to mean we can’t police the egregious examples.

“Ironically Facebook is itself a great example of this. The UK has a very fluffy law – well, the whole EU does – that you can’t use people’s data in a way they didn’t anticipate,” she said. “Who could ever enforce that law? Well, the UK’s Information Commissioner’s Office is pursuing a case against Cambridge Analytica’s use of Facebook ‘likes’ to alter users’ voting behaviour. Because that’s sufficiently egregiously not what the users expected to happen – and also important – that it’s worth pursuing. That’s the kind of calls Facebook could be making, and they probably owe that to their users.”

Sadly, Facebook will likely roll out some flimsy and largely ineffectual hate speech filter in the next few years to quell public anger. But it’s clear that Zuckerberg, with the social media monster that is Facebook, has bitten off far more than he can chew. He just refuses to admit it.

