The European Union has issued a litany of warnings to online platforms, urging them to better moderate their hellish platforms—or else. But these have all been voluntary guidelines, backed by the looming threat of enacting “necessary legislation” if tech companies don’t shape up. This week, the EU’s executive body got a step closer to making good on that promise.
The European Commission submitted a proposal today expanding on its guidelines from March, which stated that tech companies should scrub any illegal content from their platforms within one hour of it being reported. The proposal focuses specifically on hampering the spread of terrorist content, again giving platforms one hour to take the illegal content down. For now it’s simply a proposal, requiring support from member states and the European Parliament to become legislation, according to TechCrunch.
“While cooperation under the EU Internet Forum should continue in the future, the voluntary arrangements have also shown their limitations,” the Commission stated in the proposal. “Firstly, not all affected hosting service providers have engaged in the Forum and secondly, the scale and pace of progress among hosting service providers as a whole is not sufficient to adequately address this problem.”
Earlier guidelines explicitly targeted giants in the tech space, including Facebook, Twitter, YouTube, and Microsoft, which makes sense, given the dominance of their platforms. But the Commission’s latest proposal extends beyond just the big ones, stating that all hosting service providers operating in the Union would be held accountable by this legislation, “regardless of their place of establishment or their size.”
The proposal states that while terrorist content should be quickly removed from platforms, companies should retain the data for six months in case it was mistakenly or wrongfully scrubbed, so that it can be reinstated. The Commission also proposes that beyond expeditiously removing dangerous content from their platforms, tech companies should take proactive measures to support the takedown process, citing automated tools.
Automated detection has, of course, already been implemented by many of the most powerful tech companies. Facebook and Google don’t hesitate to throw algorithms at their online harassment and misinformation problems. With regards to terrorist content, Facebook said last year that it was using both artificial intelligence and human moderators to flag extremist content. Google also developed an algorithmic tool to detect and deter ISIS recruitment on its platforms. But as we’ve seen, machines like these are not immune to screw-ups.
If the Commission’s proposal were to become law, tech giants would face some pretty hefty consequences for failing to comply. According to the proposal, companies that don’t adhere to the one-hour rule could be fined up to four per cent of their annual global turnover, BBC reports.
It’s easy to see why an institution like the European Commission would want companies to better police their services—nefarious behaviours still run rampant online, and as we’ve seen, there are real-life consequences to not preventing bad actors from proliferating on a platform. Tech companies’ moderation is a disaster, sure, but there are implications—on everything from user privacy to small platform viability—that need to be considered when proposing such sweeping regulation.