Four million. That’s the number of pieces of content on Facebook that the platform claims it took action against for containing hate speech from January to March this year, according to its most recent transparency report. (And to put a fine point on it, that’s just the content it actually caught.) In a press briefing this afternoon, vice president of global operations Justin Osofsky teased a plan to pilot a subgroup of moderators who are specifically tasked with handling hate speech.
“We’re launching a pilot program where some reviewers specialize in hate speech,” Osofsky said. “Right now most of our reviewers look at content across the spectrum. By focusing on hate speech enforcement, these reviewers will establish a deeper understanding of how it manifests, and be able to make more accurate calls.”
Facebook clarified that rather than hire on new personnel for this effort, existing moderators would be moved over. “The pilot has already launched [...] with only a couple dozen reviewers. We need to start slow so as not to impact other areas of work and ensure we are doing this properly both in terms of process and support,” the spokesperson told Gizmodo. “Simultaneously we are thinking through how to provide any necessary support to these reviewers and whether that is limiting the amount of time, additional measures of support, or other means.”
Facebook recently announced increases to pay and benefits for its moderators following years of critical coverage of the working conditions of content reviewers.
Facebook’s report indicates that the amount of hate speech being acted on has been growing steadily. Whether that’s a result of Facebook better enforcing its own rules or an uptick in this sort of content being posted is not clear from the numbers provided. In any case, those numbers are almost sure to spike in the company’s next transparency report: In March, the company finally decided to include white nationalism and white separatism in its definition of hate speech.
While Facebook—and, hell, every other tech firm—touts AI as a catchall solution to the problem of moderating at massive scale, the transparency report reveals that the nuances of hate speech make it harder to act on proactively. While Facebook claims it’s been catching over 99 per cent of spam, terrorist propaganda, and child exploitation content on its platform before users flag it, the new transparency report states that less than two-thirds of hate speech posts are similarly acted on before being reported by users. Hence, one assumes, the need for a specialised hate speech team.
Of course, the scale of Facebook makes it ripe for abuse, both of the hate speech and disinformation variety—and that’s not even mentioning the company’s own repeated failure to appropriately secure consumers’ data. Politicians like Elizabeth Warren and Bernie Sanders, as well as early Facebook figures Roger McNamee and Chris Hughes, have all, as a result, called for the company to be broken up under antitrust law—the very suggestion of which set Zuckerberg off on today’s call.
Claiming that the most pressing issues of the day would not be remedied by disentangling Facebook’s various products and business goals, the CEO half-bragged that, “the amount of capital we are able to invest in all the safety systems that go into what we’re talking about today [is] greater than the whole revenue of our company in the year before we went public, in 2012—just earlier this decade. In one decade the success of this company has allowed us to fund these efforts at a massive level. I think the amount of our budget that goes towards our safety systems is greater than Twitter’s whole revenue this year.”
“I really believe that the fight against harmful content is an incredibly important one,” Zuckerberg added. “We’re fully invested in this and we’ll continue to do even more and that’s kind of my view on this.”