We Finally Have Some Hard Data On How Much Twitter Sucks

Nine months after Amnesty International called on Twitter to be more transparent about abuse on its platform, the organisation has published another study indicating that — brace yourself — Twitter still has a damning online abuse problem, and it overwhelmingly affects women of colour.

While the findings are hardly surprising if you’ve even half-tuned into the platform’s discourse over the last few years, seeing the cold, hard numbers is a sobering reminder of Twitter’s hellish reality. The study, conducted in tandem with software company Element AI, found that black women were 84 per cent more likely than white women to be included in an abusive or problematic tweet.

“One in ten tweets mentioning black women was abusive or problematic,” Amnesty writes, “compared to one in fifteen for white women.”

To conduct the study, over 6,500 volunteers from 150 countries waded through 228,000 tweets sent to 778 women politicians and journalists across the U.S. and the UK last year. In total, researchers estimate that over the course of the year, a problematic or abusive tweet was sent to these women every 30 seconds, on average.

“The report found that as a company, Twitter is failing in its responsibility to respect women’s rights online by failing to adequately investigate and respond to reports of violence and abuse in a transparent manner which leads many women to silence or censor themselves on the platform,” Amnesty writes.

The key findings from Amnesty’s so-called “Troll Patrol project” aren’t necessarily world-shattering — Twitter’s rampant toxicity has been a dark and slimy cornerstone of the service for years. But they add hard-earned data to the criticism that Twitter still can’t get a handle on its worst users, which in turn is negatively impacting the most marginalised people on its platform.

According to the study, women of colour were 34 per cent more likely than white women to be mentioned in an abusive or problematic tweet. It also found that a total of 7 per cent of tweets mentioning women journalists were problematic or abusive, compared to 7.12 per cent of tweets mentioning politicians. And the tweets considered in this study don’t even include deleted tweets or ones from accounts Twitter suspended or disabled last year — likely the worst and most blatant examples.

“By crowdsourcing research, we were able to build up vital evidence in a fraction of the time it would take one Amnesty researcher, without losing the human judgment which is so essential when looking at context around tweets,” Milena Marin, Senior Advisor for Tactical Research at Amnesty International, said in a statement.

Amnesty defines abusive tweets as those which “include content that promotes violence against or threats to people based on their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease,” which can include “physical or sexual threats, wishes for the physical harm or death, reference to violent events, behaviour that incites fear or repeated slurs, epithets, racist and sexist tropes, or other content that degrades someone.”

Problematic tweets, on the other hand, “contain hurtful or hostile content, especially if repeated to an individual on multiple occasions, but do not necessarily meet the threshold of abuse” and “can reinforce negative or harmful stereotypes against a group of individuals (e.g. negative stereotypes about a race or people who follow a certain religion).”

As Amnesty notes, abusive tweets violate Twitter’s policies, but problematic ones are more nuanced and aren’t always in violation of them. The organisation still decided to include them in this study because, Amnesty writes, “it is important to highlight the breadth and depth of toxicity on Twitter in its various forms and to recognise the cumulative effect that problematic content may have on the ability of women to freely express themselves on the platform.”

Vijaya Gadde, Twitter’s legal and policy lead, told Gizmodo in an email that Amnesty’s inclusion of problematic content in its report “warrants further discussion,” adding that “it is unclear” how the organisation defined or classified this content and whether it thinks Twitter should remove problematic content from its platform.

In its latest biannual transparency report, released last week, Twitter said it received reports on over 2.8 million “unique accounts” for abuse (“an attempt to harass, intimidate or silence someone else’s voice”), nearly 2.7 million accounts for “hateful” speech (tweets that “promote violence against or directly attack or threaten other people on the basis of their inclusion in a protected group”), and 1.35 million accounts for violent threats.

Of those, the company took action—which includes up to account suspension—on about 250,000 for abuse, 285,000 for hateful conduct, and just over 42,000 for violent threats.

Gadde said that Twitter “has publicly committed to improving the collective health, openness, and civility of public conversation on our service” and that the company is “committed to holding ourselves publicly accountable towards progress” when it comes to maintaining the “health” of conversations on its platform.

Twitter, Gadde also pointed out, uses both machine learning and human moderators to review online abuse reports. As we’ve seen clearly on other platforms like Facebook and Tumblr, artificial intelligence tools can lack the ability to understand the nuances and context of human language, making them inadequate on their own for determining whether certain content is legitimately abusive. It is the details of these tools that Amnesty wants Twitter to make more public.

“Troll Patrol isn’t about policing Twitter or forcing it to remove content,” Marin said. “We are asking it to be more transparent, and we hope that the findings from Troll Patrol will compel it to make that change. Crucially, Twitter must start being transparent about how exactly they are using machine learning to detect abuse, and publish technical information about the algorithms they rely on.”

