On Tuesday, the British government announced that it plans to release a new AI it claims can detect 94 per cent of ISIS propaganda videos with 99.995 per cent accuracy. The UK’s Home Office says that platforms can use its AI to scan videos as they’re being uploaded, detecting terrorist content and blocking it from ever appearing online.
The detection model announced Tuesday is aimed at smaller hosting platforms like Vimeo and pCloud. As the press release notes, such platforms “are increasingly targeted by Daesh [another name for ISIS] and its supporters and they often do not have the same level of resources to develop technology.” That lack of resources compounds the problem of extremist content: small companies can’t field an army of moderators, so less extremist content is detected, and the lighter moderation in turn makes those sites more attractive places to upload it.
The Home Office hasn’t disclosed a release date for the tool, but says officials will soon travel to Silicon Valley to meet with tech companies on how to combat terrorist content. The announcement says extremist content from ISIS (also known as ISIL or Daesh) appeared on more than 400 platforms in 2017.
While British officials have floated the idea of an “extremism tax” to penalise major platforms like Facebook and YouTube that don’t remove terrorist content immediately, this detection model appears to be the UK’s strategy for breaking that cycle. If the UK does implement fines similar to Germany’s penalties targeting hate speech, smaller companies would in theory be hit hardest by violations, because they lack the resources for large-scale content moderation.
Today’s announcement doesn’t make the tool mandatory or propose immediate new regulation, but Home Secretary Amber Rudd said the government has explored mandating pre-upload filtering.
“We’re not going to rule out taking legislative action if we need to do it,” she told the BBC on Tuesday, “but I remain convinced that the best way to take real action, to have the best outcomes, is to have an industry-led forum like the one we’ve got. This has to be in conjunction, though, of larger companies working with smaller companies.”
It’s important to note that this tool isn’t a catch-all for detecting extremist content: the model was trained specifically on Islamic State propaganda. White supremacist or anti-immigrant extremist content, for example, wouldn’t be picked up, and platforms would still need a range of moderation techniques to detect and block other banned content.
And the development of a state-sponsored content filter raises a number of questions, starting with the British government’s bold claim that the technology “can automatically detect 94% of Daesh propaganda with 99.995% accuracy.” How was that accuracy determined, and what is the tool’s false positive rate? Even the implied error rate of just 0.005 per cent would, on a platform the size of YouTube or Facebook, still mean thousands of legitimate uploads wrongly blocked by AI, a serious freedom of speech concern.
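A quick back-of-envelope calculation shows why even a tiny error rate matters at scale. The upload volume below is a hypothetical figure chosen for illustration (real platforms of YouTube’s size see far more), and the 0.005 per cent error rate is simply the complement of the government’s stated 99.995 per cent accuracy:

```python
# Back-of-envelope estimate of wrongly flagged uploads at scale.
# Hypothetical assumptions: the 99.995% accuracy claim translates to a
# 0.005% error rate, and the platform sees 1,000,000 uploads per day.
error_rate = 1 - 0.99995        # 0.005% of uploads misclassified
uploads_per_day = 1_000_000     # assumed daily upload volume

false_flags_per_day = uploads_per_day * error_rate
false_flags_per_year = false_flags_per_day * 365

print(f"~{false_flags_per_day:.0f} uploads wrongly flagged per day")
print(f"~{false_flags_per_year:,.0f} per year")
```

Under these assumptions the filter would wrongly block dozens of uploads a day, and tens of thousands over a year, on a single platform.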
Things get even stickier when implementation is considered. Will users be notified about the use of this new class of state-supported algorithms? And, finally, there’s the question of culpability. If platforms use the UK’s AI, whose fault is it when it misses something?
Hopefully, the British government will answer some of these questions before the tool is rolled out. Either way, a government releasing its own content-blocking AI while openly considering greater regulation is sure to concern free speech advocates in the UK and around the world.