Microsoft's Brad Smith Calls For 'Industrywide' Plan To Fight Extremist Content After Christchurch

Mourners at a March 19, 2019 vigil outside Al Noor mosque in Christchurch, New Zealand, which along with Linwood Islamic Center, was targeted by a white supremacist who killed 50 people on March 15, 2019. (Photo: Vincent Yu, AP)

Microsoft has called for the tech industry to set a uniform approach to violent, extremist content following the sickening massacre of Muslims at two mosques in Christchurch, New Zealand by a white supremacist earlier this month.

At least 50 people were brutally murdered, scores of others were injured, and footage live-streamed on Facebook by the shooter went viral on a stomach-churning scale.

In a blog post on Sunday, Microsoft president Brad Smith wrote that “Words alone are not enough. Across the tech sector, we need to do more.

“Especially for those of us who operate social networks or digital communications tools or platforms that were used to amplify the violence, it’s clear that we need to learn from and take new action based on what happened in Christchurch.”

Writing that Microsoft had already identified how its services were used to spread the video and the company had “identified improvements we can make and are moving promptly to implement them,” Smith added,

“Ultimately, we need to develop an industrywide approach that will be principled, comprehensive and effective. The best way to pursue this is to take new and concrete steps quickly in ways that build upon what already exists.”

While Smith acknowledged in the post that “no one yet has all the answers,” he did propose several concrete steps. First, he suggested improvements to hashing technology — a technique in which a specific file, such as a photo or video, is assigned a unique identifier that can be used to track copies of it uploaded elsewhere.
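To make the limitation of exact-match hashing concrete, here is a minimal sketch (my own illustration, not any platform's actual pipeline) using a cryptographic hash as the unique identifier. A byte-for-byte re-upload produces the same fingerprint and is caught, while even a one-byte edit produces a completely different one:

```python
import hashlib

# Illustrative stand-ins for uploaded video files (not real data).
original = b"...video bytes..."
reupload = b"...video bytes..."   # byte-for-byte copy of the original
edited   = b"...video bytes!.."   # trivially altered copy

def fingerprint(data: bytes) -> str:
    """Exact-match identifier: any change to the input changes the digest."""
    return hashlib.sha256(data).hexdigest()

print(fingerprint(original) == fingerprint(reupload))  # True  -> caught
print(fingerprint(original) == fingerprint(edited))    # False -> slips past
```

This is why, as discussed below, trivially editing a video before re-uploading can defeat systems built on exact fingerprints.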

Smith wrote that while this system has been effective, platforms should be able to automatically catch edited versions of those videos. He also suggested that browsers should include safe search-like features “to block the accessing of such content at the point when people attempt to view and download it.”

Smith added that companies that operate web platforms should agree on a category of “confirmed events” that would trigger cooperation between companies, and he called for the tech community to foster a “healthier online environment more broadly.”

Some of these ideas are likely to be controversial—and not solely within parts of the internet hostile to rules against hate speech, calls to violence, and extremist content. For one, hashing works by identifying specific pieces of content, which can be easily gamed by editing said content before upload.

Identifying edited versions of a video is trickier, and given that the automated systems that many large-scale platforms use to manage troves of content are a mess, one might be wary that such a system could purge relatively benign content by design or accident.
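Catching edited copies generally requires perceptual hashing, which fingerprints what content looks like rather than its exact bytes. Below is a toy average-hash sketch (an assumption-laden illustration, not Microsoft's or any platform's actual matcher) on a tiny grid of pixel brightness values: each bit records whether a pixel is above the frame's mean, so a uniformly brightened copy still matches, which is also why such systems can misfire on superficially similar but benign content:

```python
def average_hash(pixels):
    """Perceptual fingerprint: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits; small distance means a likely match."""
    return sum(x != y for x, y in zip(a, b))

frame = [[10, 200, 30], [40, 250, 60], [70, 80, 90]]    # toy 3x3 frame
edited = [[p + 5 for p in row] for row in frame]        # brightened copy

print(hamming(average_hash(frame), average_hash(edited)))  # 0 -> still matches
```

The bytes of `frame` and `edited` differ, so an exact hash would miss the copy, but the perceptual fingerprints are identical. The flip side is the over-blocking risk described next: perceptual similarity is not the same as intent.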

One theoretical example: A platform scrubbing footage of news broadcasts that incorporate clips of a terrorist attack. Politico media critic Jack Shafer, for example, recently compared the Christchurch video to footage of the September 11, 2001 attacks on the World Trade Center.

Shafer’s may not be an entirely coherent comparison; footage of the 9/11 attacks was recorded by bystanders, while the Christchurch footage was explicitly intended as terrorist propaganda. But the line between what is and is not acceptable to air is not always clear-cut, and one lesson of the past few years is that big tech companies are not immune to political pressure when making those decisions. (As an aside, another possibility is that such technology could be used to further draconian copyright enforcement measures.)

Conversely, Smith’s safe search-like proposal seems less problematic—browsers typically allow users to proceed to flagged content at their own risk. And by now, it’s apparent that his call for tech companies to take more responsibility for their platforms is warranted, given that they routinely claim their scale makes it impossible to moderate content... even as they rake in the profits.

A study last year showed that far-right and white supremacist YouTubers have capitalised on influencer techniques to spread far and wide. Facebook and Google have been criticised for spreading anti-vax conspiracy theories.

Hate speech runs rampant not only on places like 4chan and 8chan, but more mainstream web destinations like Reddit. Twitter verified vitriolic racists like Richard Spencer and only backtracked after intense media scrutiny (and people like notorious white supremacist and former Ku Klux Klan leader David Duke continue to tweet on a daily basis). 

As the New York Times editorial board wrote in an op-ed last year, there’s some evidence that the fundamental design of social media promotes toxic ideas.

There is obviously no easy answer to these questions! But on the flip side, that does not mean they are so hard as to warrant washing one’s hands of them. (In a recent Marketplace Tech podcast, Harvard Kennedy School researcher and former Facebook privacy and public policy staffer Dipayan Ghosh suggested treating extremist content like junk mail.)

There isn’t a binary choice between a totalitarian internet and a sprawling cesspool. That’s probably what the worst people online want you to think.

[Microsoft via The Verge]
