Facebook Quietly Made A Ton Of Updates Hours After Trump Got Impeached

There are a few things Facebook’s proven, time and again, that it can’t quite get right—like keeping its user data under wraps, curbing partisan political ads, and, as proved this week, blogging. Truly though, you have to give the multibillion-dollar tech giant props for perfecting the art of the well-timed news dump in the midst of a major political event. It turns out the impeachment was no exception.

In an email to Gizmodo, a Facebook spokesperson confirmed that on December 19—the day after Donald Trump became the third president in U.S. history to be impeached, via a late-night House vote—the company last updated its standards surrounding hate speech. And boy, what an update it was.

Here are a few of the “dehumanising comparisons” that Facebook users aren’t allowed to post anymore, per the update:

– Black people and apes or ape-like creatures

– Black people and farm equipment

– Jewish people and rats

– Muslim people and pigs

– Muslim person and sexual relations with goats or pigs

– Mexican people and worm-like creatures

– Women as household objects or referring to women as property or ‘objects’

– Transgender or non-binary people referred to as ‘it’

“Statements denying existence” of these kinds of marginalised groups also received a blanket ban, meaning that statements like “trans people don’t exist” would also be struck down under these new guidelines. Naturally, all of these rules apply to all content—not just text—meaning that all those pesky memes would also be held accountable.

As with all of the company’s community standards, the consequences of posting these kinds of things, on paper, range from mild to severe:

The consequences for violating our Community Standards vary depending on the severity of the violation and the person’s history on the platform. For instance, we may warn someone for a first violation, but if they continue to violate our policies, we may restrict their ability to post on Facebook or disable their profile. We also may notify law enforcement when we believe there is a genuine risk of physical harm or a direct threat to public safety.

And again, as is often the case with Facebook’s standards, it’s likely that these things were, in fact, already considered hate speech, and that spelling them out is a way to make the impossible task of moderating a deluge of content easier to swallow—especially because comparing a race to a particular animal means something extremely different depending on which race it is. (Also, maybe just stop generalising about races, on Facebook and everywhere else.)

Calling out language like the above shows just what people were getting away with—or trying to get away with—in the face of Facebook’s self-proclaimed AI prowess at stopping exactly that. Back in November, for example, the company boasted that its AI had flagged 7 million pieces of content as potential hate speech, and it has previously teased the idea of forming a coalition of moderators dedicated specifically to the task.

These weren’t the only updates that snuck under the radar. In fact, per Facebook’s spokesperson, every change to the company’s community standards this past December took place while impeachment was dominating everyone’s attention. And while some of these—like the company’s blanket ban on Census interference—made the newswires, bans on livestreaming capital punishment and on mocking survivors of sexual abuse were mysteriously absent. And in the absence of an RSS feed or any sort of notification system on the Community Standards updates page, it’s likely these changes were swept under the rug with barely anyone noticing.

Facebook declined to comment on whether these policies were publicised as much as the Census announcement—or whether they were publicised at all.

Aside from the hate speech updates, the company snuck in eight other changes to its community standards in the middle of one of the biggest stories of political hellfire in years.

  • The “Violence and Incitement” policy was expanded to ban “misinformation that contributes to the risk of imminent violence or physical harm” (rather than only misinformation that immediately contributes to that harm).

  • The “Coordinating Harm and Publicising Crime” standard expanded to ban Census fraud, rather than just voter fraud.

  • The “Fraud and Deception” standard expanded, now banning users from engaging with, promoting, or facilitating anything related to “fake or manipulated documents” like phony coupons or medical prescriptions. It did the same for “betting manipulation,” “fake fundraising campaigns,” and “debt relief or credit repair scam[s].” To top it off, recruiting a workforce to run these scams also got the ban.

  • The policies for “Sexual Exploitation of Adults” expanded to include “forced stripping,” atop the already banned content surrounding “non-consensual sexual touching, crushing, necrophilia or bestiality.” Mocking the victims of any of those acts—or admitting to participating in them yourself—is verboten under the new ruleset.

    While sharing revenge porn violated the standards before, this update adds that threatening or “stating an intent to share” it is also a violation, as is “offering” these pictures or asking for them at all.

    Also (finally) banned: upskirts.

  • The sections of the “Human Exploitation” category dealing with private citizens were amended to include “involuntary minor public figures.”

  • The policies on “Violent and Graphic Content” now ban livestreams or pictures of “capital punishment.”

  • The policy surrounding “Cruel and Insensitive” content spelled out its ban on “sadism towards animals”:

Imagery that depicts real animals visibly experiencing any of the following while being laughed at, made fun of, or subjected to sadistic remarks (except staged animal vs. animal fights or animal fights in the wild):

– premature death

– serious physical injury (including mutilation)

– physical violence from a human

  • In a grim expansion of the “Memorialisation” policies, the company added that living relatives of Facebook users who die by suicide can ask that pictures of the weapon used, or any content “related” to the death, be removed from the deceased’s profile photo, cover photo, and recent timeline posts. Family members of murdered Facebook users can have any pictures of the assailant (convicted or alleged) cut out, too.

    And just in case you were wondering:

For victims of murder, we will also remove the convicted or alleged murderer from the deceased’s profile if referenced in relationship status or among friends.

It’s worth noting that, in general, the company’s not particularly shy about touting updates to its litany of community standards, even going so far as to put out a semi-regular report detailing how it’s handling updates like these and how well they’re being enforced.

But for updates like these—ones that not only show the holes constantly being punched in those automated systems, but also shine a light on some of the worst sides of Facebook’s roughly 250-million-strong U.S. user base—a news dump will do.

