What Apple, Google, Meta, Microsoft, TikTok and Twitter Said in Their Aussie Misinformation Transparency Reports

In February 2021, Google, Microsoft, TikTok, Twitter, Facebook (now Meta) and Redbubble signed on to the Digital Industry Group Inc's (DIGI) voluntary code of practice, which is aimed at combating the spread of misinformation and disinformation in Australia. Since launch, the code has gained two further signatories in Adobe and Apple.

Under the Australian Code of Practice on Disinformation and Misinformation, signatories have committed to safeguards to protect against online disinformation and misinformation, including publishing and implementing policies on their approach, and providing a way for their users to report content that may violate those policies. You can read more about the code here.

Part of the commitment to the code is publishing transparency reports on the work done by each company on their respective platforms.

Today, DIGI published the transparency reports from each of the signatories. Each of the tech giants gave themselves a pat on the back for doing good things, so we're ignoring that. We dive into what Meta's report showed over here, and TikTok's, too, but we thought it was worth summarising some statistics from the rest of the companies. In alphabetical order, let's start with Adobe.

Adobe

As Adobe isn’t a social media platform, what it’s doing to prevent misinformation is a little interesting. One initiative sees ‘content credentials’ embedded from the capture of the image, as well as its editing and its publishing, in order to build the necessary trust in the provenance of the content. Content with credentials will specify any edits and changes made to it, and trust in any content without credentials can be interpreted and assessed based on that lack of information.

There were no specific Australian initiatives or stats to share, but Adobe said it's committed to the code.

You can read Adobe’s transparency report here.

Apple

Apple doesn’t exactly have a social media platform either, but it does have a news aggregation service, Apple News, which is the avenue for the iPhone maker’s interest in this space. Apple News gathers “professional news organisations” (you can find Gizmodo Australia there if you’re yet to use the service). Actually, it uses humans to curate these. That curation takes selections from various sources, with various points of view. It also has a COVID hub that collected (and still does) accurate information around the ongoing pandemic. The hub received 3.2 million total views in September 2021, 3.5 million in October 2021, 2.7 million in November 2021 and 482,000 in January 2022.

You can still report something if Apple's got it wrong. In 2021, Apple News readers worldwide reported approximately 655,000 concerns about article content or technical issues. Articles produced by Australian publishers that were actioned for misinformation/disinformation in 2021 accounted for less than one one-hundredth of one percent of total article views in the Australian Apple News app.

You can read Apple’s transparency report here.

Google

While Google is also not a social media site per se, it does organise the world’s information. It also has that little video-hosting service, YouTube.

In 2021, more than 25 million YouTube videos were removed globally for violating its Community Guidelines; more than 90,000 of those were uploaded from IP addresses in Australia. Another 700,000-something YouTube videos were removed globally after being flagged as dangerous or misleading COVID-19 content, over 5,000 of which were uploaded from IP addresses in Australia.

On the advertising side of things, Google said it blocked or removed 3.4 billion 'bad ads' for policy violations. It also said over 657,000 creatives from Australia-based advertisers were blocked for violating the company's misrepresentation ads policies (misleading claims, clickbait, unacceptable business practices and the like).

On the Search side of things, Google listed the procedures it has in place for when users search for a rapidly developing story. In this scenario, users are shown a notice indicating that it may be best to check back later, when more information from a wider range of sources might be available. It also said fact checks regularly appear alongside news articles in Search.

You can read Google’s transparency report here.

Meta

Over 180,000 pieces of content were removed last year from Facebook and Instagram Pages or accounts specific to Australia for violating Meta’s Community Standards in relation to harmful health misinformation.

That 180,000 was an increase from 110,000 in 2020. Meta said Australians also benefited from content it removed in other countries; globally, that number tipped 11 million.

Meta also has an information hub dedicated to COVID-19 on its platforms, which it said received over 3.5 million visits from Australian users in the fourth quarter of 2021 (October, November, December). Between the beginning of the pandemic and June 2021, Meta removed over 3,000 accounts, pages and groups for repeatedly violating its rules against spreading COVID-19 and vaccine misinformation.

You can read Meta’s transparency report here.

Microsoft

In its report, Microsoft said that from January to June 2021, LinkedIn globally blocked more than 15 million fake accounts and removed more than 147,000 pieces of misinformation. Over the same period, LinkedIn blocked approximately 120,000 fake accounts attributed to Australia and removed 2,149 pieces of reported misinformation posted or shared by Australian members. Of those Australian accounts, 54,883 were stopped at registration, another 64,642 were restricted before any reports were received and a further 1,281 were restricted after members reported them.

Microsoft Advertising, meanwhile, took down more than 3 billion ads globally for various policy violations, almost twice as many as in 2020. It also introduced advertiser identity verification in seven markets, including Australia, to ensure customers see ads from trusted sources by requiring selected advertisers to establish their identity as a business or as an individual.

Microsoft Start (the company's news aggregation service) was introduced in September 2020 and launched in Australia in May 2021. Since then, Microsoft has removed 3,353 comments containing COVID-19 misinformation, another 265 about QAnon and a further 21 about Russia/Ukraine. There were 190,000 Australian takedowns in total.

Still on takedowns, Microsoft removed 2,956 Australian ads.

You can read Microsoft’s transparency report here.

Redbubble

If you’re unfamiliar, Redbubble is an artist marketplace. There isn’t too much in Redbubble’s report, but it said its Content Safety Team proactively screens the marketplace on a daily basis and removes content that it considers to include disinformation or misinformation related to known topics and issues.

You can read Redbubble’s transparency report here.

TikTok

We dive into TikTok’s report a little deeper here, but a stand out stat is that in total, from January 2021 through to December 2021, TikTok removed 12,582 videos that were deemed ‘Australian medical misinformation’. In total, TikTok labelled 198,721 Australian videos with a COVID-19 notice. 42,792 in August alone. Interestingly, @NSWHealth had 16,323,677 views on its 121 videos posted on TikTok.

You can read TikTok’s transparency report here.

Twitter

Twitter said 39,607 Australian accounts were actioned for violations of the Twitter Rules, a further 7,851 Australian accounts were suspended, and 51,394 pieces of content authored by Australian accounts were removed. Under Twitter's COVID-19 policy, 817 Australian accounts were actioned, 35 were suspended and 1,028 pieces of content authored by Australian accounts were removed. Six Australian accounts were also actioned for violations of the civic integrity policy, with six pieces of content authored by Australian accounts removed under the same policy.

You can read Twitter’s transparency report here.

Making things mandatory

As this code is voluntary, the Morrison government earlier this year said it would be introducing legislation to combat harmful disinformation and misinformation online. When the code was launched, then-Communications Minister Paul Fletcher said it was the government's intention to watch the voluntary code of practice carefully.

“We’ve made it plain that we will review the performance, the Australian Communications and Media Authority will report to me in the middle of the year on performance, we’ve also made it plain if we don’t see that code working, we’ll certainly consider other measures,” Fletcher said at the time.

“[The] government will be watching carefully to see whether this voluntary code is effective in providing safeguards against the serious harms that arise from the spread of disinformation and misinformation on digital platforms.”

The legislation would provide the ACMA with new regulatory powers to hold big tech companies to account for harmful content on their platforms. At the time, the ACMA said it supported moving this code to a mandatory instrument, but we’re unsure if this will be a priority under the new Labor government.

