Clearview AI, the American maker of a controversial facial recognition tool used mostly by police, is facing a wave of legal complaints across Europe as of Thursday, alleging sweeping privacy violations based on internal documents showing the company’s algorithm at work.
Complaints filed with privacy watchdogs in five countries — France, Austria, Italy, Greece and the United Kingdom — by a group of privacy and human rights organisations allege systemic illegality at Clearview, a New York City-based startup whose clients have reportedly included some 2,000 U.S. taxpayer-funded agencies.
“European data protection laws are very clear when it comes to the purposes companies can use our data for,” said Ioannis Kouvakas, a legal officer at Privacy International, one of four groups behind the complaints. “Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users.”
Clearview, the subject of several damning BuzzFeed News reports — describing encouragement of abuse, widespread unauthorised access by police, and plans to court repressive regimes overseas — was first revealed by the New York Times in January 2020 to have amassed a database of more than three billion images scraped from Facebook and elsewhere without consent. Accessible via mobile app, the massive trove serves to generate biometric profiles of as many people as possible, ostensibly to put names to the faces of persons wanted by police.
According to BuzzFeed, Clearview’s product has been used or tested at more than 200 companies, including retail giants such as Best Buy, Home Depot, and Walmart. A months-long investigation by the site this year also found that police officers at dozens of departments, at a minimum, had downloaded and used the app without their departments’ knowledge.
“Clearview seems to misunderstand the internet as a homogeneous and fully public forum where everything is up for grabs,” said Privacy International’s Lucie Audibert, who views the company’s actions as threatening to the “numerous rights and freedoms” enabled by an open internet.
Thursday’s complaints accuse Clearview of vacuuming up countless photos of individuals inside E.U. countries, contravening a number of privacy protections — particularly those enumerated under the GDPR, Europe’s data privacy law, and its United Kingdom equivalent.
Privacy International’s complaints in the U.K. and France are joined by simultaneous filings by the Hermes Centre for Transparency and Digital Human Rights in Italy, Homo Digitalis in Greece, and noyb – the European Centre for Digital Rights in Austria.
Clearview’s processing of personal data was declared illegal this year by the data protection authority in Hamburg, Germany’s second largest city. The ruling arose from a complaint filed by Matthias Marx, a German computer scientist and Chaos Computer Club member — photographs of whom Clearview used to generate a biometric profile without his knowledge or consent.
Marx was able to learn of his biometric profile by sending the company a data subject access request (DSAR), a legal tool in Europe for compelling companies to release copies of stored personal data to their owners. In January, after finding Clearview had violated the law, the Hamburg Data Protection Authority (DPA) ordered the unique mathematical value forming Marx’s biometric identity be deleted.
The DPA dismissed multiple arguments Clearview offered up in its defence. Although the GDPR extends to non-European companies so long as they are “monitoring” people inside the E.U., Clearview had rejected the idea that Marx had been monitored over any period of time. The company had merely, it said, provided a “snapshot of some photos available on the internet.”
In a rebuke, the DPA pointed to a photo of Marx scraped by Clearview from a stock image website. It included text not only identifying him as a “student” but placed him physically in Hamburg on a particular date. “Accordingly,” the DPA said, Clearview does not merely offer a snapshot “but evidently also archives sources over a period of time.”
Naming specific infractions, the authority stated that a person’s behaviour is considered “monitored” anytime it’s recorded in a “target[ed] manner and stored in the form of personal data.” “Systematic recording is not necessary,” it added. “The sensitivity of the monitored behaviour is irrelevant. The motive for the monitoring is also irrelevant.”
But to the disappointment of both Marx and privacy groups long critical of Clearview, the DPA’s deletion order was narrowly focused: it covered only Marx and his biometric profile.
“This surveillance machine is terrifying,” Marx said at the time. “Almost one year after my initial complaint, Clearview AI doesn’t even have to delete the pictures that show me. And even worse, every individual must submit their own complaint. This shows that our data is not yet sufficiently protected and that there is a need for action against biometric surveillance.”
The new complaints, which regulators have three months to address, cite “various” additional data requests filed by other individuals, Privacy International said. The filers contend the documents exemplify a pattern of unlawful activity by Clearview across the region.
Clearview AI could not be immediately reached for comment.
Last May, Clearview CEO Hoan Ton-That told the Wall Street Journal that it deletes data on people in the E.U. upon request. While law enforcement in the E.U. had tested its facial recognition technology, he said at the time, the company had no customers in the Union.
Nevertheless, marketing materials obtained by BuzzFeed only three months prior show that Clearview had touted plans for “rapid international expansion,” which included nine E.U. countries.