Axon, America’s biggest police body camera manufacturer, is banning the use of face recognition technology in its products for now, after its independent ethics board said Friday that the “technology is not yet reliable enough to justify its use on body-worn cameras” due to “unequal and unreliable performance across races, ethnicities, genders and other identity groups.”
Once primarily known for selling the Taser, Axon is now a $US300 million ($428 million) company selling a suite of technologies including police body cameras to 17,000 law enforcement agencies in over 100 countries. The company also sells so-called “smart weapons” for military and government use.
Over the last decade, Axon has shifted heavily to selling police body cameras. As the technology has proliferated, civil liberties experts are increasingly worried that the devices can be used to covertly introduce heavily criticised face recognition technology into cities.
Barry Friedman, director of the Policing Project at New York University School of Law and member of Axon’s ethics board, said in a statement that “very real concerns” about bias in face recognition technology make it too risky to incorporate into police body cameras.
“At present, there are very real concerns about the accuracy of face recognition, and particularly about biases in the way it identifies people along racial, ethnic and gender lines,” Friedman said. “Until we address and mitigate these challenges, we cannot risk incorporating face recognition technology into policing. The ethics board applauds Axon for acting consistently with this.”
In a statement emailed to Gizmodo, Axon said “we, as a company, agree with” the ethics board’s conclusion that face recognition technology should be excluded from Axon body cameras.
“Current face matching technology raises serious ethical concerns. In addition, there are also technological limitations to using this technology on body cameras,” Axon said. “Consistent with the Board’s recommendation, Axon will not be commercialising face matching products on our body cameras at this time.”
The ban was first reported by The New York Times.
Last month, San Francisco became the first American city to ban government-run face recognition surveillance.
Nearby Oakland appears poised to do the same, and jurisdictions around the United States are at varying stages of considering similar measures.
In New York, for instance, a proposal would ban landlords from using face recognition surveillance on tenants.
California is considering a state-wide ban on face recognition in police body cams.
There is a deepening well of research underlining the profound problems of face recognition surveillance technology. As Gizmodo previously reported:
The majority of American adults are in a police facial recognition database, according to a Georgetown Law study. An MIT study found that Rekognition, Amazon’s popular facial recognition product, struggles to identify the faces of women and people of colour.
A study by the ACLU in 2018 found Amazon Rekognition falsely matched 28 members of Congress to mugshot photos, and that results disproportionately negatively impacted people of colour.
The Axon ethics board, which was staffed by the Policing Project at Axon’s own request, argues in its 42-page report that in real-world conditions, the kind of difficult-to-predict situations a police officer faces daily, “face recognition technology performs quite poorly – both in terms of false negatives and false positives.”
The board also pointed to the disparity in performance between white people and people of colour, and between men and women, due in part to unrepresentative training data that reinforces decades of bias.
Last year, Axon established an ethics board focused on artificial intelligence and policing technology. Almost instantly, the increasing use of Axon’s body cameras as surveillance tools became the first point of focus.
Among the six conclusions detailed in its report, the board said that AI ethics committees are not nearly as important as government regulation of AI surveillance technologies—regulation that is currently largely absent. The report reads:
“We also are strong believers in the need for government regulations of face recognition. Although we applaud Axon’s restraint and commitment to ethics through the creation and use of this Board, we know that we do not have all the answers and are not representative of all the communities in which face recognition might eventually be deployed.
We cannot rely on private companies to regulate themselves entirely, although we certainly can hold them to high ethical standards. To that end, we call upon governments—federal, state, or local—to step in and fill this regulatory gap, as some have already begun to do.”
In 2018, Microsoft President Brad Smith called for federal regulation of face recognition, and the company won accolades in the press in April for publicising its decision not to provide face recognition technology to a California law enforcement agency.
“We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology,” Smith wrote. “As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”
Although Axon will refrain from using face recognition technology in its body cameras for now, it left open the possibility of adopting it at a later date. The company said in its statement that it believes “face matching technology deserves further research to better understand and solve for the key issues identified in the report, including evaluating ways to de-bias algorithms as the board recommends.”
Axon’s ethics board expects to revisit the issue “periodically” as the technology changes.