Since the beginning of the internet, online harassment has been a problem. We created this big, beautiful digital landscape that lets people be completely unfiltered and we all do different things with this freedom. I, for example, use my platform to make sex memes and lightly neg Silicon Valley billionaires.
Others take this opportunity to become the most scary-arse, shitbag, bigoted versions of themselves, hiding behind the comfy anonymity of their computer screens, facing no real consequences for threatening to rape, kill and torture people.
Since it doesn't look great for social media companies to allow this on their platforms -- not addressing bad behaviour condones it, according to the liberal lamestream media -- companies are now scrambling to deal with their massive abuse problems.
So on Friday, Instagram announced its plan to let users filter their comment streams, which came as no surprise. The Facebook-owned, image-sharing network plans to let each user build their own "banned words list" to filter out comments on their posts, a smart move that is mindful of the reality that everybody's definition of harassment is different.
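Instagram hasn't published any implementation details, but the idea is simple enough to sketch. Here's a minimal, hypothetical version of a per-user banned-words filter (all names and the word list are made up for illustration):

```python
import re

def build_filter(banned_words):
    """Return a function that hides comments containing any banned word.

    Matches whole words, case-insensitively, so "UGLY" is caught
    but "smugly" is not.
    """
    pattern = re.compile(
        r"\b(?:" + "|".join(re.escape(w) for w in banned_words) + r")\b",
        re.IGNORECASE,
    )
    def is_allowed(comment):
        return pattern.search(comment) is None
    return is_allowed

# Each user supplies their own list; matching comments never render.
allowed = build_filter(["troll", "ugly"])
comments = ["Great pic!", "You're so UGLY", "cute dog"]
visible = [c for c in comments if allowed(c)]
# visible == ["Great pic!", "cute dog"]
```

The point of the design is that the word list lives with the user, not the platform, which is exactly what makes a one-size-fits-all definition of harassment unnecessary.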
The Washington Post reported that users will also have the ability to turn off comments for individual posts, while The Verge claims that Instagram has yet to decide whether it will let all users disable comments.
This comes on the heels of some recent high profile harassment incidents on Twitter, the nerdy debate team president to Instagram's hot popular girl. In the past weeks -- and months and years -- the microblogging network has taken a lot of flak for the way it deals with abuse.
Actress Leslie Jones quit the network after being the victim of a horribly racist campaign, initiated, in part, by Breitbart "tech" blogger Milo Yiannopoulos. After receiving a flood of bad press about all this, Twitter fired back by banning Yiannopoulos from its site.
The Verge wrote that "Instagram is building the anti-harassment tools Twitter won't." While Instagram's new plan is clever -- the easiest way to make your users happy is to give them the choice to use it however the fuck they want, which is what the new plan appears to do -- there is a reason Twitter has a more difficult time dealing with these issues.
Instagram is a photo-sharing network. Its purpose is to allow users to brag about their brunches with bae (fucking murder me), cute dogs, fire memes, and selfies with friends. Comments are a secondary feature on the platform, so allowing each person to be in complete control of how people respond to their posts makes sense. At its core, Instagram is pictures. Without a comments section, the network would still flourish.
Instituting the same strict comment moderation policies on Twitter isn't a possibility because Twitter is the comments section. "Twitter in its very structure creates a flawed kind of level playing field," writes Davey Alba at Wired.
The network assigns the same inherent value to a tweet and a reply. Turning off or filtering out words you don't like for your replies would defeat the purpose of Twitter: screaming your hot takes and dumb jokes into a chaotic void. Some voices are inevitably louder than others, because of a sexy blue checkmark or lots of followers.
But really, it's about what ultimate purpose these networks serve and why people decide to use them. People log onto Twitter to express their worldview, to make jokes and vent with other users about both the personal and the political. Instagram is for posting pictures. That's why the new plan will work for Instagram.
There is no "one size fits all" method to fixing online abuse. While every major social network should be working hard to combat the harassment their platforms facilitate, at the end of the day, social media serves as an echo chamber for people's shitty and offensive takes. The internet is full of harassment because the world is full of harassment.
Filtering out offensive phrases in the comments section is placing a little bit of gauze onto an open infected wound: it will slow the bleeding, but it won't cure shit.