Lawmakers Warn Clearview AI Could End Public Anonymity if Feds Don’t Ditch It

A security camera in the Port Authority Trans-Hudson (PATH) station at the World Trade Centre in New York in 2007; used here as stock photo. (Photo: Mario Tama, Getty Images)

Democratic lawmakers are ratcheting up efforts to limit the federal government’s work with notorious surveillance firm Clearview AI. In a series of letters addressed to the Departments of Justice, Defence, Homeland Security, and the Interior on Wednesday, the lawmakers called on the agencies to end their use of the company’s tech, arguing the tools “pose a serious threat to the public’s civil liberties and privacy rights.” The agencies named in the letters were all identified in a Government Accountability Office report released last year as having used Clearview AI tools in domestic law enforcement activities.

The letters were co-signed by four progressive politicians: Sens. Ed Markey and Jeff Merkley and Reps. Pramila Jayapal and Ayanna Pressley. In their letter to the DHS, the lawmakers claimed Clearview AI’s tech — which reportedly relies on a database of more than 4 billion faces, many of them scraped from the open internet — could effectively eliminate the notion of public anonymity if left unchecked.

“In conjunction with the company’s facial recognition capabilities, this trove of personal information is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified,” the lawmakers wrote.

Clearview AI’s partnerships with government agencies are of particular concern, the authors argued, because a public that believes they are being surveilled by their government may be less likely to engage in civic discourse or other activities protected by the First Amendment. The lawmakers went on to express concerns over facial recognition’s “unique threats to marginalised communities,” citing previous research showing how the technology performs worse when trying to identify people with darker complexions and Black women in particular.

In an emailed statement to Gizmodo, Clearview AI CEO Hoan Ton-That said a National Institute of Standards and Technology test of the company’s tech “shows no detectable racial bias,” and said he wasn’t aware of any instance where Clearview AI’s technology has resulted in a wrongful arrest. In his statement, Ton-That pointed to data from the Innocence Project, which claims 70% of wrongful convictions result from eyewitness lineups, a figure he used to argue in favour of Clearview’s comparatively higher accuracy rates.

“Clearview AI is able to help create a world of bias-free policing,” Ton-That claimed. “As a person of mixed race this is highly important to me.”

While those figures may seem informative on their own, they fail to account for the sheer scope and scale of Clearview’s pervasive technology. They also fail to address the broader privacy and civil liberties concerns that have most troubled advocates, particularly as they pertain to Clearview AI.

“We are proud of our record of achievement in helping over 3,100 law enforcement agencies in the United States solve heinous crimes, such as crimes against children and seniors, financial fraud and human trafficking,” Ton-That added.

In their letters, the lawmakers partially addressed these points, arguing the potential threats posed by facial recognition extend beyond accuracy claims.

“Communities of colour are systematically subjected to over-policing, and the proliferation of biometric surveillance tools is, therefore, likely to disproportionately infringe upon the privacy of individuals in Black, Brown, and immigrant communities,” the lawmakers wrote. “With respect to law enforcement use of biometric technologies specifically, reports suggest that use of the technology has been promoted among law enforcement professionals and reviews of deployment of facial recognition technology show that law enforcement entities are more likely to use it on Black and Brown individuals than they are on white individuals.”

This isn’t the first time these lawmakers have taken on facial recognition. Back in 2020, the same Democrats authored the Facial Recognition and Biometric Technology Moratorium Act, which sought to end federal use of real-time facial recognition technology. That bill would have also limited states’ access to federal grants if they chose to continue using facial recognition. At the time, the legislation gained the endorsement of a litany of civil liberty and privacy groups, including the American Civil Liberties Union, Electronic Frontier Foundation, Fight for the Future, Colour of Change, MediaJustice, Electronic Privacy Information Centre, and Georgetown University Law Centre’s Centre on Privacy & Technology, among others.

Around two dozen cities and states across the U.S., including San Francisco, Boston, and Minneapolis, have stepped up their efforts to curtail public facial recognition use in recent years, though a federal data privacy law has remained elusive.
