In the wake of last month’s terrorist attack in Christchurch, New Zealand, Australia is putting major pressure on Big Tech to prevent the spread of hateful and violent content on their platforms, with a new law that threatens major fines and imprisonment.
The law positions Australia at the extreme end of a growing push to police the digital gatekeepers.
On Thursday, Australia’s parliament passed legislation that carries penalties for companies of up to 10 per cent of annual global net sales over the 12 months preceding an offence, and imprisonment of up to three years for company executives. It is unclear from the legislation which executives would be subject to the law.
The new Sharing of Abhorrent Violent Material bill requires hosting services and content service providers to notify law enforcement and “expeditiously” remove content that depicts “abhorrent violent conduct,” which the bill defines as terrorist acts, murder, attempted murder, torture, rape, and kidnapping.
The law does not limit its scope to the executives of major corporations like Google or Facebook. Instead, those who run any site or service that fails to remove “abhorrent violent material” from their servers are also subject to potential fines or imprisonment under the law.
The bill was drafted in response to the horrific shooting at two mosques in Christchurch, during which the shooter livestreamed his attack and white supremacist messages on Facebook. After the original livestream was removed, the video rapidly proliferated across the platform as Facebook moderators scrambled to remove hundreds of thousands of re-uploads.
The company says it automatically blocked 1.2 million versions, while another 300,000 made it past its filters. Videos were also shared on Twitter, Reddit, and YouTube, all of which likewise struggled to contain the footage.
Australia’s attorney general, Christian Porter, told reporters on Thursday that Twitter and Facebook “should not be playing footage of murder,” according to The Guardian. “There are platforms such as YouTube, Twitter, and Facebook who do not seem to take their responsibility to not show the most abhorrently violent material seriously,” Porter told reporters.
The Guardian reports that Porter explained that a jury would have to decide what constitutes an “expeditious” timeframe for removal of offending content, meaning it is not specifically defined by the law.
Porter added, “every Australian would agree it was totally unreasonable that it [the Christchurch video] should exist on their [Facebook’s] site for well over an hour without them taking any action whatsoever.”
Digital Industry Group Inc (DIGI), an Australian group that represents Twitter, Google, Facebook, and Amazon, among other tech companies, denounced the new law.
“This law, which was conceived and passed in five days without any meaningful consultation, does nothing to address hate speech, which was the fundamental motivation for the tragic Christchurch terrorist attacks,” DIGI managing director Sunita Bose said, in a statement.
Bose said that DIGI members try to remove abhorrent content as fast as possible. “But with the vast volumes of content uploaded to the internet every second, this is a highly complex problem that requires discussion with the technology industry, legal experts, the media and civil society to get the solution right,” Bose said.
“We have zero tolerance for terrorist content on our platforms,” a Google spokesperson told Gizmodo. “We are committed to leading the way in developing new technologies and standards for identifying and removing terrorist content.”
Facebook and Twitter directed Gizmodo to the statement from DIGI.
In an interview on Good Morning America that aired shortly after the bill was passed, Facebook CEO Mark Zuckerberg rejected calls made after the Christchurch attack to add a delay to livestreams, because it would “fundamentally break what livestreaming is for people.”