In the winter of 2010, a 19-year-old Moroccan man named Kacem Ghazzali logged into his email to find a message from Facebook informing him that a group he had created just a few days prior had been removed from the platform without explanation. The group, entitled “Jeunes pour la séparation entre Religion et Enseignement” (or “Youth for the separation of religion and education”), was an attempt by Ghazzali to organise with other secularist youth in the pious North African kingdom, but it was quickly thwarted. When Ghazzali wrote to Facebook to complain about the censorship, he found his personal profile taken down as well.
Back then, there was no appeals system, but after I wrote about the story, Ghazzali was able to get his accounts back. Others haven’t been so lucky.
In the years since, I’ve heard from hundreds of activists, artists, and average folks who found their social media posts or accounts deleted – sometimes for violating some arcane proprietary rule, sometimes at the order of a government or court, other times for no discernible reason at all.
The architects of Silicon Valley’s big social media platforms never imagined they’d someday be the global speech police. And yet, as their market share and global user bases have increased over the years, that’s exactly what they’ve become.
Today, the number of people who tweet is nearly the population of the United States. About a quarter of the internet’s total users watch YouTube videos, and nearly one-third of the entire world uses Facebook.
Regardless of the intent of their founders, none of these platforms were ever merely a means of connecting people. From their early days, they fulfilled greater needs. They are the newspaper, the marketplace, the television. They are the billboard, the community newsletter, and the town square.
And yet, they are corporations, with their own speech rights and the ability to set the rules as they like – rules that more often than not reflect the beliefs, however misguided, of their founders. Mark Zuckerberg has long professed the beliefs that representing oneself through more than one identity indicates a lack of integrity, and that conversations held under one’s real name are more civil – despite overwhelming evidence to the contrary.
As such, Facebook users are forced to use their “authentic identity” – a name found on some form of written ID – regardless of whether doing so puts them in danger or exposes a part of themselves best kept private. The policy prevents youth from exploring their sexuality freely for fear of being outed; people with chronic illnesses from engaging with support groups out of concern that insurance companies or employers might learn of their plight; and activists living under repressive regimes from organising online.
In some instances, it is a combination of personal beliefs and other factors that leads to seemingly arbitrary policies. On Facebook, Instagram, and YouTube, men may appear shirtless, but women’s nipples are considered pornographic.
This is in part a reflection of U.S. societal norms and traditions, but these companies could have chosen a different path, one reflected in their founders’ supposed belief in freedom of expression. They could have recognised that their global user base might have different views about women’s bodies. But instead they chose to stick with the patriarchal norm, and other companies followed.
In the early days of the social internet, these companies grappled with their newfound responsibilities by consulting with academics, non-profits, and think tanks, particularly those with a civil liberties bent, about difficult policy decisions. When the nascent Syrian uprising turned into a civil war in 2012, YouTube talked frankly with NGOs in an effort to find a policy solution that would allow videos containing graphic violence to remain online.
In the end, the company agreed that as long as the videos were captioned with sufficient context, they could stay. Contrast that with today: groups such as Syrian Archive that are trying to archive and preserve videos emerging from Syria for use as evidence are engaged in battle with YouTube, as the evidence keeps disappearing.
And then there are the cases in which companies say their hands are tied; that is, when a foreign government comes knocking with a court order for an account or post to be taken down. Depending on the government, the order could be for anything from Holocaust denial to insulting the country’s ruler, but with increasingly few exceptions, as long as the order matches the law, the content will be removed or locally blocked.
In Turkey, videos insulting Atatürk, the country’s modern founder, used to trigger government bans on YouTube. Now, such content is simply removed. And in Thailand, Facebook regularly removes any posts that could offend the royal family.
Companies act as though they have no choice in the matter, but it’s important to remember that their motivation is money. When, for example, the UAE tightened its cybercrime law, adding vaguely worded provisions establishing prison terms for anyone who endangers national security or “the higher interests of the State,” companies might have reconsidered having their offices there.
The choice to pull out of a given country is always on the table, but choosing it means losing a market, something companies justify by telling themselves that people are better off with a censored version of their product than without the product at all. Of course, for the people, it also means losing direct access to the platform – a problem easily solved, in most cases, by using a VPN.
In recent years, companies have faced pressure from governments and the public to “do more” about hateful speech and extremism. But doing more has all too often meant applying blunt tools to a nuanced problem: words like “dyke” or the Burmese “kalar” (meaning “foreigner” or “Indian” but sometimes used derogatorily) trigger censorship regardless of how they’re used. Any association, no matter how remote, with what a company deems a “dangerous” group can result in account deletion.
In looking for simple solutions to complex problems, we as a society have further deputised the unelected leaders of these corporations to filter our speech, and placed the burden of that filtering on workers largely based in the global South… and for what? No amount of censorship has ever cured societal ills.
And now, as the next billion are set to come online, they will encounter a sanitised corporate internet, one very different from what existed before these companies dominated the landscape. For those coming from repressive societies, it may still mean more freedom of expression and access to information than what’s available offline, but for them, and for those living under democratic rule, the window of acceptable expression continues to narrow.
Furthermore, new regulations in Europe and elsewhere rely on companies to enforce existing laws, deputising them to make rapid decisions about the legality of content and fining them when they fail, providing no incentive for companies to err on the side of free expression.
By delegating the rulemaking to profit-driven companies, and enforcement to under-resourced labourers halfway around the world, we’ve effectively decided that a diverse range of expression is no longer worth fighting for.
Rather than looking to corporate censorship to solve our problems, we should be investigating holistic solutions that deal with hate, terror, and other societal ills at their roots.
Any limitations to free expression should be in line with international human rights norms. And rather than applying sophisticated tools like image recognition to content moderation, companies should be making those tools available to users, so that they can filter out any content they’d rather not see, be it naked bodies or, say, pictures of snakes.
Censorship will always be inconsistent at scale, and that inconsistency will only intensify as more rules are applied. Right now, on social media platforms, censorship has become the norm, and free expression the exception. Let’s reverse that trend.
Jillian C. York is a writer and activist based in Berlin whose work examines the impact of technology on our societal and cultural values. She is the Director for International Freedom of Expression at the Electronic Frontier Foundation.