One of the more frequent annoyances of our modern world is the endless defences social media platforms make to sidestep their responsibilities to curb the myriad problems that arise solely because they exist. Every so often, one of them will pat themselves on the back for doing the Absolute Bare Minimum to amend an ongoing problem.
This week, Twitter, arguably among the top-ranking hell sites on the internet, announced it was updating its rules to better combat "hateful content." It said in a Tuesday blog post that this would begin with the removal of reported tweets containing "language that dehumanises others on the basis of religion."
Civil rights groups quickly pointed out that it's not going to fix the site, but hey, at this point anything that de-platforms hate groups or hateful sentiment is welcome.
This move is just one drop in a bucket of changes that Twitter and other sites have announced in recent months as social media giants struggle to right their respective ships, but it's one of a laundry list of changes we've needed on social media since long before the problem escalated. Though you'd certainly be forgiven for missing any of the numerous "We're Trying" announcements from some of the most powerful companies in the world.
So below, for your reference, an extensive but not necessarily exhaustive cornucopia of recently announced changes that should have been implemented from the get-go.
Twitter has struggled with its slow crawl toward making its website anything other than a source of illimitable despair for years now.
During Jack Dorsey's bizarre, self-flagellating press tour earlier this year, in which he mostly answered questions about why he's unable to fix his busted-arse website, the Twitter CEO gave himself a C grade for his own performance and acknowledged that "we've put most of the burden on the victims of abuse (that's a huge fail)."
The site has made some changes recently, however, in addition to its new policy to remove a few more morsels of shitty tweets from its buffet of terrible content.
Flagging misleading health information: In mid-May, Twitter added a feature that flags legitimate public health information to people who search for certain keywords related to vaccines. Given the pervasive culture of misinformation around this topic, on its platform and others, it sure would seem to have been wise to do this sooner – and yet…
Giving any thought whatsoever to whether hate belongs on its platform: Twitter recently decided to research whether it should allow white supremacists on its site. Company executive Vijaya Gadde told Motherboard at the end of May that the company thinks "counter-speech and conversation are a force for good, and they can act as a basis for de-radicalisation, and we've seen that happen on other platforms, anecdotally."
Labelling problematic content in the name of 'public interest': In the interest of protecting "the health of the public conversation" on its platform, Twitter said it would add a special warning to tweets that may violate its rules but have been permitted to remain on the platform ostensibly because they serve "the public's interest."
More likely this is because Twitter understands that it is a platform that serves a robust culture of trolls.
Moderating self-harm content: Instagram in February cracked down on content around self-harm, including any graphic images of such activity, and no longer displays non-graphic self-harm content in search or the explore tab. Good! Too late, but good!
Testing removing likes: Minor compared to some of the others on this list, but one I believe is in the best interest of its users: Instagram announced in April that it would begin a test to hide likes on images from everyone other than the account holder, which Head of Instagram Adam Mosseri said during F8 he hoped would encourage users to "spend a bit more time connecting with the people that they care about." Endless scrolling still enabled though, so good luck with that bit.
Suspending hate figures: Instagram, along with its parent company Facebook, finally caved and permanently suspended a handful of conspiracy theorists and political wingnuts, with a company spokesperson citing at the time its policy against hate speech or that which might incite violence.
"We've always banned individuals or organisations that promote or engage in violence and hate, regardless of ideology," a Facebook spokesperson told Gizmodo in an email in May. "The process for evaluating potential violators is extensive and it is what led us to our decision to remove these accounts today." That's right, it already had a policy against this and it still didn't ban these toxic users until a couple of months ago.
Doing literally anything about fake health information: Facebook announced during a press conference in early May that it was taking a more hardline approach to tackling the spread of vaccine-related misinformation on its platform by limiting its reach, though some content opposing vaccines may still be allowed to remain.
Tackling Instagram bullying: In July, Instagram rolled out a new effort to combat bullying with a two-part approach: flagging potentially hurtful or offensive comments before a user posts them, and allowing users to "Restrict" someone else, hiding that person's comments on their photos from public view.
Read receipts in DM and activity status will also be inaccessible for restricted users. Honestly, if shaming and muting work, we're not going to complain.
Mark Zuckerberg seemingly oscillates between defensiveness and, uh, more defensiveness in perpetuity. But in an attempt to feign a flicker of accountability, Facebook continues to announce initiatives to fix its platform, presumably to counter the impression that it's slowly eroding democracy.
Banning explicit white nationalist and white separatist content: Yes, really. That only happened this year.
Suspending hate figures: Facebook finally put a stop to the conspiracy horseshit these hate-mongering lizard people peddled to their massive followings on the platform.
Pivoting to "privacy": Mark Zuckerberg announced Facebook's pivot to "privacy" this year, a concept antithetical to the morally reprehensible conduct Facebook engaged in for years, not the least of which includes innumerable privacy and security fuck-ups.
Fixing its busted comments algorithm: Facebook announced in June an update to its ranking system for public comments with an emphasis on surfacing "safe and authentic comments" and other so-called "integrity signals" in order to "address the integrity of information and improve the quality of comments people see." Whatever helps you sleep at night, I guess.
Working to prevent involvement in genocide: Facebook also claimed in June that it's working to address the real-world violence its platform is largely responsible for inciting. Brave!
Demoting fake health information: Facebook recently announced the bold and courageous move to "minimise" the reach of misleading health information in News Feed but not ban it outright. You know, just in case you still want to find the latest "miracle cure."
No longer helping the pedophiles: YouTube, a Google-owned property, has bungled the problem of managing child exploitation on its platform in such an extraordinary way that advertisers distanced themselves from the site.
In response to the blowback, YouTube announced new initiatives in February to "protect minors" – including disabling comments on videos of minors, restricting live streams, and "reducing recommendations" – after the site was exposed as a hotbed for pedophiles.
Managing its hate speech issue, but only kind of: YouTube announced in June that it was taking a "harder look" at the widespread problem of hate speech on its platform, saying at the time that it was "determined to evolve our policies, and continue to hold our creators and ourselves to a higher standard."
It also began to ban explicitly pro-Nazi videos and others that claim that one group is superior to another "in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status." Hey, it only took YouTube 14 years to start kicking off the Nazis!
Zapping the anti-vaxxers: Plagued by anti-vaccination misinformation, Pinterest put a lid on the whole goddamn topic by banning any vaccine-related search results. And honestly? Seeing as no one else has managed to get this problem under control on their platforms to date, maybe that wasn't such a bad idea.
As is likely obvious at this juncture, social media at large is arguably broken beyond repair to the extent that in most cases, the only adequate response may be to pull the plug on the whole damn operation.
But as far as profit goes, it's not always in the interest of tech giants to weed out the bad seeds and create something close to an amicable environment – despite it being well within their power to do so, as the list above shows, and certainly in spite of whatever noise powerful executives make about their hands being tied.
In short: Everything is on fire most of the time. My advice? Log off and call your mum.