Google Censors Gorillas Rather Than Risk Them Being Mislabelled As Black People, But Who Does That Help?

Two years ago, the object-recognition algorithm fuelling Google Photos told a black software engineer, Jacky Alciné, that his friends were gorillas. Given the long, racist history of white people claiming the people of the African diaspora are primates instead of human beings, Alciné was predictably upset. So was Google.

As a result, Google censored the words "gorilla," "chimp," "chimpanzee," and "monkey" from Google Lens, essentially "blinding" the algorithm by stripping each word from Lens' internal lexicon. Two years later, the words remain censored. Wired tested Google Photos, a standalone app that uses AI to group similar images, against a collection of 40,000 photos. After uploading photos of the animals, Wired found they couldn't be retrieved using the banned words, only adjacent terms like "zoo," "jungle," or "baboon." Google reportedly confirmed the censored terms; we've reached out to the company for more information.
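
Google hasn't explained how the block works under the hood. One plausible reading of the Wired findings is a simple post-processing blocklist that strips a few terms from whatever the classifier outputs, which would explain why the photos are still grouped correctly but can't be surfaced with the banned words. Below is a minimal illustrative sketch of that idea; the classifier and its output format are hypothetical stand-ins, and nothing here reflects Google's actual code.

```python
# Purely illustrative sketch of a post-hoc label blocklist; Google has not
# disclosed how Photos or Lens actually suppress these terms, and the
# classifier below is a stand-in with hard-coded predictions.

BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def classify_image(image_path: str) -> list[tuple[str, float]]:
    """Stand-in classifier returning (label, confidence) pairs."""
    return [("gorilla", 0.91), ("mammal", 0.88), ("zoo", 0.67)]

def visible_labels(image_path: str) -> list[tuple[str, float]]:
    """Strip blocked terms from the classifier's output before they reach
    search or tagging, so the banned words can never be used to retrieve
    a photo even though the underlying model still recognises the animal."""
    return [(label, score)
            for label, score in classify_image(image_path)
            if label.lower() not in BLOCKED_LABELS]

print(visible_labels("example.jpg"))  # [('mammal', 0.88), ('zoo', 0.67)]
```

Filtering output labels this way changes nothing about what the model has learned; it only hides the words.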

Google's move to censor Photos shows how corporations, even those at the forefront of AI advancements, have no schema for holding their own algorithms accountable. Consider Facebook's tepid apologies after its own algorithms surfaced "jew hunter" as a valid form of employment last summer. Or WeChat's apology after its Chinese translation service offered the "n-word" as an English translation in a message about a black woman who was late for work. Or Nikon, whose face-detection software in 2010 thought East Asian people were blinking.

Public outrage, especially online, is capricious, unreliable, and hyperbolic. But companies bend because boycotts and backlashes scare away investors and advertisers.

Therein lies the hypocrisy: Companies rely on AI to cut costs and drive innovation, but are paralysed when algorithms behave as humans do and showcase biases. Algorithms abet racism, and Silicon Valley fixes the source of the outrage - censoring Lens, retooling translation software - but not the system engineered for secrecy instead of transparency and apologies instead of accountability.

Academics in mathematics, computer science, and the social sciences are pioneering a new interdisciplinary field of research into the real-world impacts of AI. The field isn't sexy enough to have a buzzword just yet, but it's known variously as "fairness in machine learning," "algorithmic accountability," or "algorithmic fairness." The main thrust is to push companies to formalise real enforcement mechanisms for holding AI accountable.

Because when AI is pushed as the complex logic of advanced machines, it obscures the human hands guiding it, and suddenly, conveniently, there's no person to blame when something goes wrong. A faceless mathematical equation does the deed and a faceless PR rep offers the apology.

So what should Google do? Google should offer more insight into how it's refining its software to weed out bias, and, more broadly, the company needs to be upfront about the still-developing state of object recognition. We don't even know whether its recognition software has improved at differentiating people from animals. Algorithmic justice is still only a fragmentary lens, but, like algorithms, it evolves the more we use it.

[Wired]

Comments

    "Consider Facebook's tepid apologies after its own algorithms surfaced "jew hunter""

    The link you've posted doesn't seem correct; it leads to an unrelated story

    Public outrage, especially online, is capricious, unreliable, and hyperbolic.
    You don't say?

    Algorithms aren't inherently racist unless explicitly programmed that way - they have no sense of morality, hate, or any other emotion. They just try to match things based on whatever rules the neural network has learned. Accusations of racism are ridiculous. It's just comparatively primitive software making an error.

      Exactly. Surely this is a case where the better approach would be to tune the algorithm to work more accurately, rather than completely screwing it up. Next thing we'll see is that people can't find pictures of biscuits because someone blocked the term "Cracker", and it just snowballs till it's useless.

      However, it would be interesting to know *why* Google's AI thought that black people were gorillas. How is it making that leap? Was there faulty data put in, was it done deliberately, or is it machine learning that's picked up racist comments and used them?

        it's because they do look like gorillas. why is that such a bad thing? hasn't anyone ever laughed at a celebrity comparison where a celeb looks like a proboscis monkey or a sloth?

        should Michael Clarke Duncan have turned down his paycheck for planet of the apes?

        people need to get over this stuff, it is holding us back. I am mixed race and do not care if google says my skin is coffee colored, ok?

    Who edits the headlines? "then" instead of "than"?

    Google Censors Gorillas Rather Then Risk Them Being Mislabelled As Black People, But Who Does That Help?

    *than

      So are you gonna contribute anything to the actual discussion or are you just gonna continue with your head up your own rectum?
