Can We Make Non-Racist Face Recognition?

Illustration: Angelica Alzona

As companies race to employ facial recognition everywhere from major league ballparks to your local school and summer camp, we face tough questions about the technology’s potential to intensify racial bias. Commercial face recognition software has repeatedly been shown to be less accurate on people with darker skin, and civil rights advocates worry about the disturbingly targeted ways face-scanning can be used by police.

Nevertheless, these systems continue to roll out across the country amid assurances that more accurate algorithms are on the way. But is the implementation of truly non-racist (as opposed to just “colorblind”) face recognition really possible? To help answer this question, we talked to experts on face recognition, race, and surveillance, and asked them to consider if we could ever remedy the technical, cultural, and carceral biases of face recognition.

Technical biases and technical solutions

Earlier this year, MIT researchers Joy Buolamwini and Timnit Gebru highlighted one of the ways face recognition is biased against black people: darker-skinned faces are underrepresented in the datasets used to train these systems, leaving them less accurate on darker faces. The researchers found that when various face recognition algorithms were tasked with identifying gender, they miscategorized dark-skinned women as men up to 34.7 per cent of the time. The maximum error rate for light-skinned males, on the other hand, was less than 1 per cent.
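
To make the disparity concrete, here is a minimal, hypothetical sketch in Python of the kind of disaggregated audit the researchers performed: rather than reporting a single overall accuracy figure, error rates are computed separately for each subgroup. The data and function names are illustrative only, not the Gender Shades code.

```python
# Illustrative sketch (not the Gender Shades code): audit a classifier by
# computing its error rate separately for each subgroup, instead of hiding
# the disparity inside one overall accuracy number.
from collections import defaultdict

def error_rates_by_subgroup(records):
    """records: iterable of (subgroup, true_label, predicted_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for subgroup, true_label, predicted_label in records:
        totals[subgroup] += 1
        if predicted_label != true_label:
            errors[subgroup] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical toy data: overall accuracy looks passable, but the errors
# are concentrated in one subgroup.
audit = [
    ("darker-skinned women", "female", "male"),
    ("darker-skinned women", "female", "female"),
    ("darker-skinned women", "female", "male"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
]
print(error_rates_by_subgroup(audit))
# darker-skinned women: ~0.67 error rate; lighter-skinned men: 0.0
```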

“To fail on one in three, in a commercial system, on something that’s been reduced to a binary classification task, you have to ask, would that have been permitted if those failure rates were in a different subgroup?” Buolamwini asked in an accompanying news release from MIT.

In the paper, Microsoft’s gender classifier had a 20.8 per cent error rate for dark-skinned women. In response, Microsoft announced in June that it was recalibrating its training data by diversifying the skin tones in its facial training images, applauding itself for narrowing the racial discrepancies in gender classification rates. This, however, only speaks to one kind of bias in face recognition.

“We’re talking about two separate and unique issues in our industry,” Brian Brackeen, CEO of AI startup Kairos, told Gizmodo. Technical biases, he explained, have technical solutions. But even fully functioning face recognition can abet biased systems, a problem requiring more culturally complex solutions. “Both are problems and both deserve attention, but they are two separate things.”

Kairos makes biometric login systems that can let bank customers use their face to check their accounts, employees clock into work, and amusement park visitors access fast-pass lanes. In these contexts, Brackeen says, the stakes of a false positive or a false negative are much lower. Being misidentified by your bank is not the same as being misidentified by police.

“I’m much more comfortable selling face recognition to theme parks, cruise lines, or banks,” said Brackeen. “If you have to log into your [bank] account twice because you’re African American, that’s unfair. But you’re not gonna get shot.”

Brackeen, who jokingly identifies as “probably the only” black CEO of a face recognition company, entered the media spotlight last month when he revealed Kairos turned down a contract with body camera manufacturer Axon. According to Brackeen, face recognition exponentially enhances the capabilities of police, which, in turn, exponentially exacerbates the biases of policing.

“When you’re talking about an AI tool on a body camera, then these are extra-human abilities. Let’s say an officer can identify 30 images an hour,” said Brackeen. “If you were to ask a police department if they were willing to limit [recognition] to 30 recognitions an hour they would say no. Because it’s not really about the time of the officer. It’s really about a superhuman ability to identify people, which changes the social contract.”

Ultimately, Brackeen sees a vendor-end solution: In an editorial last month, he called for every single face recognition company to stop selling its tech to law enforcement agencies.

Fruit from a poisonous tree

Face recognition works by matching the person being scanned against a database of facial images. In policing contexts, these databases can include passport and driver’s licence photos or mugshots. In Orlando, police partnered with Amazon to test face recognition connected to surveillance cameras in public places. In New York, school districts have begun exploring similar systems to scan visitors’ faces after the Parkland shooting. In both cases, the goal is to instantaneously identify persons of interest, such as those with outstanding warrants.
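
For readers curious about the mechanics, here is a minimal sketch of that matching step, under the common assumption that each face is reduced to an embedding vector and compared against enrolled images by cosine similarity above a threshold. The function names and the 0.6 cutoff are illustrative, not any vendor’s actual API.

```python
# Minimal sketch of the matching step: a scanned face, reduced to an
# embedding vector, is compared against a database of enrolled images.
# This mirrors a common design, not any particular vendor's implementation.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe: np.ndarray, database: dict, threshold: float = 0.6):
    """Return the enrolled identity most similar to the probe face, if any.

    `database` maps identity -> embedding vector. Anything scoring below
    `threshold` is treated as "no match"; where that cutoff sits decides how
    often the system wrongly flags someone versus how often it misses a
    genuine match.
    """
    best_identity, best_score = None, threshold
    for identity, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_identity, best_score = identity, score
    return best_identity
```

The threshold is a policy decision as much as an engineering one: loosening it catches more people on the watchlist but also flags more innocent bystanders, a trade-off that matters far more at a police body camera than at a theme-park gate.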

This, however, assumes warrants are themselves distributed “fairly” or should always trigger police intervention. Consider Ferguson, Missouri, where the shooting death of Mike Brown sparked days of protests. A Justice Department investigation after Brown’s death found that Ferguson police were “shaped by the city’s focus on revenue rather than by public safety needs.” As the report explained, police routinely targeted black drivers for stops and searches as part of a racist, lucrative revenue model, issuing arrest warrants for missed and partial payments.

The numbers were staggering: Representing 67 per cent of the population in Ferguson, black citizens were the target of 85 per cent of traffic stops, and 91 per cent of all stops resulted in some form of citation. In a future where all drivers are instantly identifiable via face recognition, consider what life would be like for anyone instantaneously matched to an outstanding arrest warrant produced by a biased system. As face recognition becomes standardised and enters schools, stadiums, airports, and transit hubs, the surveillance powers of the police grow. Even with recalibrated training models, bias persists. One scholar we talked to argued bias-free face recognition could never exist within the policing system.

“[Face recognition] imagines policing as neutral. We know that’s not the case,” Simone Browne, an assistant professor at the University of Texas at Austin and the author of Dark Matters: On the Surveillance of Blackness, told Gizmodo. Dark Matters argues that biometric surveillance turns the body itself into a form of evidence, a form of hyper-objectification with historical connections to slavery. Browne writes:

Racializing surveillance is also part of the digital sphere with material consequences within and outside of it… data that is abstracted from, or produced about, individuals and groups is then profiled, circulated and traded within and between databases. Such data is often marked by gender, nationality, region, race, socioeconomic status and… for some, these categories are particularly prejudicial.

Browne argues that face recognition creates a digital copy of our physical selves that functions as an ID, which is then analysed, shared, scrutinised, matched against us — essentially trafficked — all as a means of verifying our identity and tracking our behaviour. Face recognition categorizes humans, thus becoming a vehicle for the sometimes prejudicial results of putting people into biometric categories. We can see the consequences of such categorization in gang databases, terror watch lists, and even preferred shopper lists.

“We can’t yet imagine that that’s going to improve things for black people, because the policing system is still intact,” Browne warned.

Who benefits from advances?

“We’re living in a moment of accelerated technology, accelerated technological development [and] scientific development,” Alondra Nelson, the director of Data & Society, which studies the social impacts of technology, told Gizmodo. “Moments of pause and reflection are necessary and, I think, important reminders that we don’t just have to be cogs in a quick moving system.”

Responding to Microsoft’s initial post on gender classification, Nelson was sceptical, tweeting at the time: “We must stop confusing ‘inclusion’ in more ‘diverse’ surveillance systems with justice and equality.”

“[Much] of my work has talked about the way that communities of colour in the African-American community understood how they could be both underserved by the sort of positive role of a particular new technology but often overexposed to its worst possible dynamic,” said Nelson.

This dual bind — where black people are subjected to science rather than supported by it — is encapsulated in the concept of “medical apartheid,” a term coined by author Harriet Washington. Born of Washington’s robust historical analysis of medical experimentation on slaves, “medical apartheid” refers to how black people have been experimented on for the sake of scientific advances from which they don’t benefit. One of the most infamous examples comes from the work of James Marion Sims, who is noted by some as the “father of gynecology” for reducing maternal death rates in the 19th century, but who conducted his research through gruesome experiments on enslaved black women.

“All of the early important reproductive health advances were devised by perfecting experiments on black women,” Washington said in a 2007 interview. “Why? Because white women could say no.” Centuries later, the maternal death rate for black women is three times higher than it is for white women.

Face recognition isn’t as dire, but “medical apartheid” is a useful framework for considering how different populations have different roles in the development, advancement, impact, and, ultimately, the benefit of scientific and technological breakthroughs. This disparity is illustrated with a simple question: Which populations can say no?

“This is not something only for [companies to ask,] it’s more about democratic governance,” said Nelson. “We need to be open to the democratic possibility that having better surveillance technology is not necessarily better.”

Outside of contexts like policing, biases (both technical and cultural) seem a lot less menacing. But the question remains: Can black people say no to being face scanned, even if it is statistically balanced, commercially applied, or fairly regulated? Like anyone, black people should be able to enjoy conveniences like shorter airport lines and easier log-ins. But when evaluating an emergent technology’s positive or negative effect on a society, we need to ask whether it has disparate impacts on members of that society, not just if it’s fun or inclusive.

Watching the watchmen

Earlier this month, Microsoft President Brad Smith issued a public (and widely reported) call for the U.S. government to regulate facial recognition after public backlash to his company’s ongoing contract with ICE. “As a general principle,” Smith wrote, “it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”

Smith called for the creation of a “bipartisan expert commission” to guide the regulation of face recognition tech. It seemed like a PR ploy at first, not unlike the diversity panels of the Obama years or the newly fashionable AI ethics boards assembled with big names, high praise, and no enforcement powers. Smith’s proposal, however, featured one major difference: Federal commissions have the direct ear of members of Congress, who are bolder than ever in their desire to regulate the “liberal bastion” of Silicon Valley, and can issue subpoenas for documents and information usually obscured by proprietary protection laws. It’s an encouraging suggestion, but tackling the biases in face recognition requires a lot more.

To create “non-racist” face recognition, the companies selling it must, yes, address the technical flaws of their systems, but they will also have to exercise a moral imperative not to give the technology to groups operating with racial bias. Additionally, legislators would need to impose hard limits on how and when face-scanning can be used. Even then, unbiased face recognition will be impossible without addressing racism in the criminal justice system it will inevitably be used in.

Achieving these goals may seem unrealistic, but this only demonstrates how pressing the problem is. Sadly, these aren’t hypothetical concerns about a distant dystopian future. Just this month, the Orlando police department renewed its much decried face recognition pilot with Amazon, while New York’s governor announced that face-scanning was soon coming to bridges and tunnels throughout New York City.

Face recognition is being marketed to consumers as a cutting edge convenience, but it has clear ties to surveillance, and ultimately, control. Imagine if every ad or article promoting a “pay with your face” system also showed criminal databases or terror watch lists. If they did, we’d get a more honest look at face recognition’s impact.

