Google Censors Gorillas Rather Than Risk Them Being Mislabelled As Black People, But Who Does That Help?

Two years ago, the object-recognition algorithm fuelling Google Photos told a black software engineer, Jacky Alciné, that his friends were gorillas. Given the long, racist history of white people likening people of the African diaspora to primates rather than human beings, Alciné was predictably upset. So was Google.

As a result, Google censored the words “gorilla,” “chimp,” “chimpanzee,” and “monkey” from Google Lens, essentially “blinding” the algorithm by stripping each word from Lens’ internal lexicon. Two years later, the words remain censored. Wired tested 40,000 photos on Google Photos, a standalone app that uses AI to group together similar images. After uploading photos of the animals, Wired found the images couldn’t be retrieved using the banned words, only adjacent terms like “zoo,” “jungle,” or “baboon.” Google reportedly confirmed the censored terms; we’ve reached out to the company for more information.
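Wired’s description suggests a blunt post-hoc filter rather than a retrained model. As a rough illustration only (none of the names or values below come from Google’s code; they’re assumptions for the sketch), a label blocklist applied before tags reach search might look something like this:

```python
# Hypothetical sketch of a "censor the label" fix. All names and values here
# are invented for illustration; this is not Google's actual implementation.

BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def filter_labels(predicted_labels):
    """Drop blocked terms from a classifier's output before they are
    indexed for search, so queries for those words return nothing."""
    return [
        (label, score)
        for label, score in predicted_labels
        if label.lower() not in BLOCKED_LABELS
    ]

# The model may still "see" a gorilla, but the tag never reaches search.
predictions = [("gorilla", 0.97), ("zoo", 0.62), ("outdoors", 0.41)]
print(filter_labels(predictions))  # [('zoo', 0.62), ('outdoors', 0.41)]
```

A filter like this hides the problem from users without changing what the underlying model actually predicts, which is the crux of the criticism that follows.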

Google’s move to censor Photos shows how corporations, even those at the forefront of AI advancements, have no schema for holding their own algorithms accountable. Consider Facebook’s tepid apologies after its own algorithms surfaced “jew hunter” as a valid form of employment last summer. Or WeChat’s apology after its built-in translation service offered the n-word as an English translation in a message about a black woman who was late for work. Or Nikon, whose face-detection software in 2010 flagged East Asian people as blinking.

Public outrage, especially online, is capricious, unreliable, and hyperbolic. But companies bend because boycotts and backlashes scare away investors and advertisers.

Therein lies the hypocrisy: companies rely on AI to cut costs and drive innovation, but are paralysed when algorithms behave as humans do and exhibit bias. Algorithms abet racism, and Silicon Valley fixes the source of the outrage (censoring Lens, retooling translation software) but not the system engineered for secrecy instead of transparency and apologies instead of accountability.

Academics in mathematics, computer science, and the social sciences are pioneering a new field of interdisciplinary research into the real-world impacts of AI. The field isn’t sexy enough for a buzzword just yet, but it goes variously by “fairness in machine learning,” “algorithmic accountability,” or “algorithmic fairness.” The main thrust is to push companies to formalise real enforcement mechanisms for holding AI accountable.
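To make that less abstract, here is a toy example of the kind of measurement the field works with: a “demographic parity” gap, the difference in how often a model gives two groups a positive result. The numbers are invented for illustration and aren’t drawn from any of the systems above.

```python
# Minimal sketch of one metric from the fairness-in-ML literature:
# the demographic parity gap, i.e. the difference in positive-prediction
# rates between two groups. All data below is invented for illustration.

def positive_rate(predictions):
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_group_a, preds_group_b):
    """Absolute difference in how often each group gets a positive outcome."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# 1 = model said "yes" (e.g. correctly recognised a face), 0 = it didn't.
group_a = [1, 1, 1, 0, 1, 1, 1, 1]   # 87.5% positive
group_b = [1, 0, 1, 0, 0, 1, 0, 1]   # 50% positive
print(demographic_parity_gap(group_a, group_b))  # 0.375
```

Metrics like this are only a starting point, but they turn “the algorithm seems biased” into a number a company can be asked to report and improve.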

Because when AI is pushed as the complex logic of advanced machines, it obscures the human hands guiding it and suddenly, conveniently, there’s no person to blame when something goes wrong. A faceless mathematical equation did the deed and a faceless PR rep offers the apology.

So what should Google do? It should offer more insight into how it’s refining its software to weed out bias and, more broadly, be upfront about the still-developing state of object recognition. We don’t even know whether its recognition software has improved at differentiating people from animals. Algorithmic justice is still only a fragmentary lens, but, like algorithms, it evolves the more we use it.

[Wired]

