Civil Rights Groups Mostly Unimpressed By New Twitter Policy Against ‘Dehumanising’ Language

Civil rights organisations were, somehow, both pleased and exasperated with Twitter on Tuesday after the social network announced the latest update to its rules against “hateful conduct.” The New York Times, which first broke the news, characterised the change as a “scaling back” of the company’s policymaking to focus solely on banning speech “insulting and unacceptable if directed at religious groups.”

The update itself focuses entirely on “dehumanising language,” i.e., the comparison of a religious group to “maggots,” “viruses,” “animals,” or “rats,” to cite several examples offered up by the company.

It boils down to a single sentence added to its hateful conduct policy – “We also prohibit the dehumanisation of a group of people based on their religion” – and is the result of months of public feedback, as well as conversations with “experts in the space,” Twitter said.

“It’s something that we’ve been thinking about for quite some time and figuring out how we can best create a solution for,” a Twitter spokesperson told Gizmodo by phone. “Whether it’s some kind of policy remediation or product solution, we wanted to make sure we have the most substantial impact.”

“It’s a starting point,” they added, “before we evolve.”

Several prominent civil rights groups said they were none too impressed with the development. Every comment applauding the update as a “step in the right direction” was accompanied by a “but” or a “however” that ultimately cast the policy as either “too little” or “too late” to address a tide of abusive content aimed at threatening or intimidating users because of, for example, their religion, the colour of their skin, or their sexual orientation.

Muslim Advocates, a vocal critic of Twitter’s moderation efforts, said that while the change was a positive one, it may be rendered toothless, depending on how Twitter chooses to enforce its new policy. The group, led by Farhana Khera, a former counsel to the U.S. Senate Judiciary Committee, further raised doubts over how Twitter would choose to define “dehumanisation,” saying the precise meaning remained unclear.

The group pointed, for example, to a tweet by President Trump this April which included footage of Rep. Ilhan Omar, one of the first Muslim women to serve in Congress, interspliced with footage of the 9/11 terrorist attacks.

The purpose of the tweet was quite clearly to conjure up a spurious and xenophobic stereotype that all Muslims are terrorists. “Would this count as dehumanising someone based on their religion?” the group asked.

Twitter, speaking with Gizmodo on Tuesday, appeared to lean toward a very literal definition of the term – tweets that explicitly “reduce a person to less than human, whether it be animalistic or mechanistic,” it said.

As to whether a video reducing all Muslims to terrorists falls under the policy shift, the answer appears to be no. While unwilling to comment on the president’s tweet specifically, a spokesperson said that was a “slightly different” situation.

“The difference here,” they said, “is the dehumanisation aspect.”

Twitter’s existing policy on hateful conduct does, in fact, prohibit tweets that assert “all [insert religious group] are terrorists.” But this appears to leave ample room for the use of dog-whistles, coded language and imagery, such as Trump’s video of Rep. Omar, that enable users to convey their bigotry with some deniability.

Facebook, for instance, has a policy that prohibits praise or support for “white nationalism” and “white separatism.” But in order to violate it, one has to use those terms specifically, a glaring flaw that civil rights groups say renders its entire enforcement process impotent.

Asked if, in general, creating and tweeting a video of a prominent Muslim figure interspliced with, say, footage of an ISIS beheading, would violate its rules against “hateful conduct,” Twitter’s spokesperson replied: “A hypothetical is a little hard to comment on when it comes to individuals and their behaviours.” They added, however, that if a “wish of harm or a threat of violence” were included, such a video would violate its rules “as they stand today.”

While the very act of conflating Muslims with terrorism is, to any rational observer, a “wish of harm,” it appears that to violate Twitter’s rules one would need to expressly spell out the threat.

The line is difficult to locate. While Trump’s tweet about Rep. Omar remains online, the president himself is exempt from punishment because, according to Twitter, anything he has to say falls under a “public interest” exemption.

“Twitter’s update is too simplistic for the complicated world we live in, and fails to address the nuanced intersections of its users’ identities,” said Rashad Robinson, president of Colour of Change, the nation’s largest online racial justice organisation.

Limiting its update to the “dehumanisation” of religious groups, he said, undercuts the company’s efforts to address the abundance of other hateful content directed at users for a variety of reasons, such as the colour of their skin.

The coalition Change the Terms, of which Colour of Change is a member, has created a definition of “hateful activities” it wishes social networks to adopt that includes not only threats of violence and harassment, but defamation targeting individuals or groups based on their “actual or perceived race, colour, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, or disability.”

Jessica Gonzales, vice president of strategy and senior counsel at Free Press, another coalition member, said the upshot was that Twitter’s update was “good” but ultimately “not nearly enough,” as the company has so far refused to take action against white supremacists in general, even as other prominent social networks have moved in recent months to do so.

“People are being taken down who are protesting racism and people are staying up who are wildly racist and organising racist rallies using social media and using Twitter, in particular,” she said. “Twitter needs to do a wholesale reform of its content moderation policy. We can’t have this happen piecemeal. It’s offensive that they’re not going head-on after white supremacy, and we think they ought to.”

When asked whether Twitter is working on plans to ban white supremacist content, and, moreover, why it is not already banned under its current policies, the company declined to comment.

“It’s good that Twitter is seeking public comment as they’re developing their policy decisions and seeking input from external experts on hate, but hate and harassment on Twitter is a serious, longstanding problem,” a spokesperson for the Anti-Defamation League, an American Jewish organisation that fights bigotry, told Gizmodo.

However, the fact that language dehumanising others on the basis of religion only now violates Twitter’s rules, the group said, “shows how far they have to go to truly combat hate.”

