“There’s been a general feeling from the platform companies of kind of playing rope-a-dope with the Congress,” U.S. Senator Ed Markey told a small audience gathered at the U.S. Federal Election Commission’s headquarters around 9am Tuesday morning. Four hours later, Markey’s well-informed inference was proven true yet again when Facebook trotted out a new blog post titled “Combating Hate and Extremism.”
It’s surely no coincidence the image-troubled social giant is scheduled on Wednesday — along with representatives from Twitter and Google — to testify at the grimly titled hearing “Mass Violence, Extremism and Digital Responsibility” before the Senate’s Commerce, Science and Transportation Committee (of which Markey is a member).
Specifically, Facebook is expected to answer for its failure to act during the Christchurch, New Zealand, shooting in March. While the shooter murdered 51 worshippers in cold blood, the footage streamed live on the platform, then was copied and reuploaded millions of times.
“The video of the attack in Christchurch did not prompt our automatic detection systems because we did not have enough content depicting first-person footage of violent events to effectively train our machine learning technology,” the company claims, in an unusual not-enough-mass-murders-have-trained-the-robots-yet defence. “That’s why we’re working with government and law enforcement officials in the US and UK to obtain camera footage from their firearms training programs.”
Given that Facebook prefaces this public relations chaff with the caveat that “some of the updates we’re sharing today were implemented in the last few months, while others went into effect last year but haven’t been widely discussed,” it’s unclear how long this AI-training initiative has been in place. Or, for that matter, which American law enforcement officials Facebook is referring to, specifically.
In an apparent attempt to further rebut likely questions tomorrow over the platform’s lack of action, Facebook also updated its “Dangerous Individuals and Organisations” policy, specifically its definition of terrorists.
Here’s what that looked like in July:
And here’s now:
The only major change is the addition of “advocates,” which will surely be enough to fool the U.S. Senate into believing Facebook’s foremost priority is social safety instead of the ruthless acquisition of profit. This is to say nothing of the total absence of meaningful details in the post regarding how Facebook polices the groups it considers terrorists, as Countering Crime’s Eileen Carey pointed out:
Facebook claims to remove 99% of terrorist content from Al Qaeda, Isis and their affiliates. For transparency, who are their affiliates? What terrorist organizations are being proactively monitored? Where are the enforcement metrics against terrorist propaganda on Instagram? pic.twitter.com/FmabQtdoMK
— Eileen Carey (@eileenmcarey) September 17, 2019
Addressing exactly these sorts of piddling deflections, and the overall lack of collaboration, Markey concluded his remarks this morning by telling platform companies that “at the end of the day frankly I think that’s going to be a huge business mistake — because we’re not going away.” He added, “we are, candidly, one significant event away […] from potentially Congress overreacting, because the next event could really be extraordinarily dramatic.”
Given that Facebook has shown itself to be a devious pit of snakes time and time again, you have to appreciate Markey’s unbounded optimism that it could at least strive for the bare minimum of America’s favourite contradiction in terms: corporate responsibility.