France just scored what it’s characterising as a major victory in the battle to curb online hate speech – one that potentially carries significant implications for privacy and free speech online.
After several meetings between Facebook CEO Mark Zuckerberg and French President Emmanuel Macron, the social media giant has agreed to give French courts identification data of users suspected of hate speech, according to Reuters.
This is a world first, and a big concession on Facebook's part. Until now, Facebook has handed identifying data to French courts only in limited circumstances: it has provided IP addresses and other identifying data in cases of terrorism and violent crime, and only when a French judge ordered it.
Reuters reports that Facebook had been hesitant to hand over identifying data in hate speech cases for two reasons. First, neither American nor French law required it to do so. Second, it worried that countries without independent judiciaries would abuse that information.
We’ve reached out to Facebook for comment and will update when we hear back. (Update below.) The company declined Reuters’ request for comment.
While Macron will get most of the credit, you can trace his stance on hate speech regulation and fake news to France’s minister for digital affairs, Cedric O. O was appointed in March, and he has since made hate speech a top priority for the French government. “This is huge news, it means that the judicial process will be able to run normally,” O told Reuters. “It’s really very important, they’re only doing it for France.”
So no, Facebook won’t be handing over ID data for hate speech cases in the U.S. – the First Amendment provides strong free speech protections and hate speech has no legal definition under U.S. law. (France’s laws on hate speech are, in contrast, far more restrictive.)
The agreement with France is nonetheless precedent-setting, at least in terms of public pressure, and you can bet other platforms, including YouTube and Twitter, will be watching how it shakes out. France is also currently debating a law that would fine tech companies up to 4 per cent of their global revenue if they fail to remove hate speech quickly.
To say most social platforms have botched hate speech moderation would be an understatement. This month, YouTube came under fire for refusing to remove an anti-gay vlogger’s videos, hastily tried to make up for it by banning Nazis, and then bungled its hate speech policy messaging.
Last year, Twitter introduced a new ban on ‘dehumanising speech,’ but a Twitter executive also said in a recent interview that what many users find abusive doesn’t necessarily violate the company’s hate speech policies. Twitter has yet to figure out its Nazi problem.
For its part, Facebook last month teased a pilot program of human moderators to zero in on hate speech on the platform.
Though the company claimed its artificial intelligence was good enough to catch 99 per cent of spam, terrorist propaganda, and child exploitation content before users reported it, less than two-thirds of user-reported hate speech on the platform was handled correctly. (That said, it’s still easy to find hate groups on Facebook.)
Update, 06/27/2019, 4:15am: Facebook provided the following statement.
“As a matter of course, we will no longer refer French law enforcement authorities to the Mutual Legal Assistance Treaty process to request basic information in criminal hate speech cases. However, as we do with all court orders for information, even in the US, we will scrutinize every order we receive and push back if it is overbroad, inconsistent with human rights, or legally defective.”