Microsoft’s Calling It Quits on Creepy Emotion Recognition Tech

Microsoft’s turning its back on its scientifically suspect and ethically dubious emotion recognition technology. For now, at least.

In a major win for privacy advocates sounding the alarm on under-tested and invasive biometric technology, Microsoft announced it’s planning to retire its so-called “emotion recognition” systems from its Azure Face facial recognition services. The company will also phase out capabilities that attempt to use AI to infer identity attributes like gender and age.

Microsoft’s decision to hit the brakes on the controversial technology comes amid a larger overhaul of its ethics policies. Natasha Crampton, Microsoft’s Chief Responsible AI Officer, said the company’s reversal comes in response to experts who’ve cited a lack of consensus on the definition of “emotions,” and concerns about overgeneralisation in how AI systems interpret those emotions.

“We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs,” Azure AI Principal Group Product Manager Sarah Bird said in a separate statement. “API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused — including subjecting people to stereotyping, discrimination, or unfair denial of services,” Bird added.

Bird said the company will move away from a general-purpose system in the Azure Face API that tries to measure these attributes in an effort to “mitigate risks.” Starting Tuesday, new Azure customers will no longer have access to the detection system, though existing customers will have until 2023 to discontinue their use. Crucially, while Microsoft says the API will no longer be available for general-purpose use, Bird said the company may still explore the technology in certain limited use cases, particularly as a tool to support people with disabilities.

“Microsoft recognises these capabilities can be valuable when used for a set of controlled accessibility scenarios,” Bird added.

The course correction is an attempt to align Microsoft’s policies with its new 27-page Responsible AI Standard, a document a year in the making. Amongst other guidelines, the standard calls on Microsoft to ensure its products are subject to appropriate data governance, support informed human oversight and control, and “provide valid solutions for the problems they are designed to solve.”

Emotion recognition tech is “crude at best.”

In an interview with Gizmodo, Surveillance Technology Oversight Project Executive Director Albert Fox Cahn called it a “no-brainer” for Microsoft to turn its back on emotion recognition tech.

“The truth is that the technology is crude at best, only able to decipher a small subset of users at most,” Fox Cahn said. “But even if the technology were improved, it would still penalise anyone who’s neurodivergent. Like most behavioural AI, diversity is penalised, and those who think differently are treated as a danger.”

ACLU Senior Policy Analyst Jay Stanley welcomed Microsoft’s decision, which he said reflects the “scientific disrepute” of automated emotion recognition.

“I hope this will help solidify a broader understanding that this technology is not something that should be relied on or deployed outside of experimental contexts,” Stanley said on a phone call with Gizmodo. “Microsoft is a household name and a big company and I hope that it has a broad effect in helping others understand the severe shortcomings of this technology.”

Tuesday’s announcement arrives on the heels of years of pressure from activists and academics who’ve spoken out against the potential ethical and privacy pitfalls of easily accessible emotion recognition. One of those critics, USC Annenberg Research Professor Kate Crawford, dug into the limitations of emotion recognition (also called “affect recognition”) in her 2021 book Atlas of AI. Unlike facial recognition, which attempts to identify a particular individual, emotion recognition seeks to “detect and classify emotions by analysing any face,” a pitch Crawford argues is fundamentally flawed.

“The difficulty in automating the connection between facial movements and basic emotional categories leads to the larger question of whether emotions can be adequately grouped into a small number of discrete categories at all,” Crawford writes. “There is the stubborn issue that facial expressions may indicate little about our honest interior states, as anyone who has smiled without feeling truly happy can confirm.”

Crawford isn’t alone. A 2019 report from the NYU research centre AI Now argued that emotion recognition technology, placed in the wrong hands, could let institutions make dystopian decisions about individuals’ fitness to participate in core aspects of society. The report’s authors called on regulators to ban the tech. More recently, a group of 27 digital rights groups wrote an open letter to Zoom CEO and founder Eric S. Yuan calling on him to scrap Zoom’s efforts to integrate emotion recognition into video calls.

Microsoft’s pivot on emotion recognition comes almost exactly two years after it joined Amazon and IBM in banning police use of facial recognition. Since then, AI ethics teams at big tech firms like Google and Twitter have proliferated, though not without some heated tensions. While it’s possible Microsoft’s decision to back off emotion recognition could spare it the public trust issues plaguing other tech firms, the company remains a major source of concern amongst privacy and civil liberties advocates due to its partnerships with law enforcement and eager interest in military contracts.

Microsoft’s decision was generally welcomed by privacy groups, but Fox Cahn told Gizmodo he wished Microsoft would take further action on its other, more profitable but equally concerning technologies.

“While this is an important step, Microsoft still has a long way to go in cleaning up its civil rights track record,” Fox Cahn said. “The firm still profits from the Domain Awareness System, [an] Orwellian intelligence software built in partnership with the NYPD. The Domain Awareness System, and the AI policing systems it enables, raise the exact same concerns as emotion recognition, only the DAS is profitable.”

