The deployment of facial recognition technology by government bodies in cities around the world continues to spread, even as lawmakers and technologists point out the ethical issues still plaguing these systems.
That’s why a San Francisco elected official’s recent proposal is so important — it effectively bans the city from using the technology.
Aaron Peskin, a member of the San Francisco Board of Supervisors, introduced the Stop Secret Surveillance Ordinance proposal on Tuesday. Its scope is more far-reaching than just prohibiting the government’s use of facial recognition tech or any data gleaned from it—it also details how the city will be held more accountable for how it uses surveillance technology in general.
The proposal requires city departments to submit both an ordinance and an impact report on any surveillance tech they want to use to the Board of Supervisors for review. This also applies to departments that have already funded and deployed surveillance tech. Departments operating this tech will also be audited each year.
While facial recognition tech is certainly a system of surveillance, it won’t even reach the review board. Under the proposal, no department is allowed to “obtain, retain, access, or use” facial recognition technology or any information derived from it.
“The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring,” the proposal states.
The Surveillance Impact Report must include, at minimum, a number of required details about the use of the technology, such as how it works, its purpose, its fiscal cost, and how it might affect the community and their rights. The proposal defines surveillance technology as: “any software, electronic device, system utilising an electronic device, or similar device used, designed, or primarily intended to collect, retain, process, or share audio, electronic, visual, location, thermal, biometric, olfactory or similar information specifically associated with, or capable of being associated with, any individual or group.”
It also lists some examples, such as automatic licence plate readers, CCTV cameras, wearable body cameras, and software to identify criminal activity, to name a few. It does not include things like office hardware, city databases, cybersecurity systems, and infrastructure control systems.
Facial recognition tech has yet to prove that it is an effective tool free from bias—there are plenty of examples of racist systems. Last July, the ACLU released a report which found that Amazon’s facial recognition product, Rekognition, incorrectly identified members of Congress as suspected criminals, and a disproportionate number of them were people of colour.
(Amazon claimed the ACLU did not use a high enough accuracy threshold in its tests.)
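The dispute comes down to the confidence threshold: Amazon said the ACLU used the service's default 80% similarity setting, and that it recommends a 99% threshold for law enforcement use. A minimal sketch can show why that single parameter matters so much; the function name and similarity scores below are invented for illustration, not drawn from any real system.

```python
# Hypothetical illustration of how a confidence threshold changes which
# face-match candidates a system reports. All scores here are invented.

def report_matches(candidates, threshold):
    """Return only the candidates whose similarity score meets the threshold."""
    return [name for name, score in candidates if score >= threshold]

# Invented similarity scores between probe photos and a watchlist database.
candidates = [
    ("person_a", 0.82),   # weak match -- passes a lax default threshold
    ("person_b", 0.86),   # weak match
    ("person_c", 0.995),  # strong match
]

# A lax 80% threshold reports all three people as matches;
# a strict 99% threshold reports only the strong match.
print(report_matches(candidates, 0.80))  # -> ['person_a', 'person_b', 'person_c']
print(report_matches(candidates, 0.99))  # -> ['person_c']
```

The point of contention is that the weak matches a lax threshold lets through become false accusations when the system is used to flag suspects, which is why the choice of default matters as much as the underlying model.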
And a review of facial recognition tech last year found that it was most accurate when classifying white men.
Peskin’s proposal notes that surveillance technology, in general, is a privacy issue for everyone, and that efforts in this space “have historically been used to intimidate and oppress certain communities and groups more than others, including those that are defined by a common race, ethnicity, religion, national origin, income level, sexual orientation, or political perspective.”
The ordinance still needs a signature from the mayor in order to be officially enacted, and if that happens, it will take effect 30 days later. This proposal is a reassuring sign that governments and tech giants may have to persuade empowered gatekeepers and the public that their potentially invasive, flawed, and biased systems should be deployed citywide—unless it’s facial recognition tech. They can leave that at the door.