Remember that facial recognition startup revealed last year to be used by law enforcement agencies around the world? Today, Australia's Privacy Commissioner has ruled that Clearview AI breached the country's privacy laws by scraping Australians' biometric information from the web and disclosing it through a facial recognition tool.
The ruling by the Office of the Australian Information Commissioner (OAIC) had been in the works since it opened the inquiry in July last year, alongside its UK counterpart. On Wednesday, the commissioner declared Clearview AI had breached the Australian Privacy Act on multiple fronts, by:
- collecting Australians’ sensitive information without consent
- collecting personal information by unfair means
- not taking reasonable steps to notify individuals of the collection of personal information
- not taking reasonable steps to ensure that personal information it disclosed was accurate, having regard to the purpose of disclosure
- not taking reasonable steps to implement practices, procedures and systems to ensure compliance with the Australian Privacy Principles.
The controversial tech startup shocked the world when it was revealed it had scraped the internet for images of faces, entered them into its facial recognition database and provided that database to law enforcement officials worldwide to search.
Clearview AI’s facial recognition tool includes a database of more than three billion images taken from social media platforms and other publicly available websites.
As a result, Australia's privacy watchdog issued Clearview AI with determination orders. These orders require Clearview AI to cease collecting facial images and biometric templates from individuals in Australia, and to destroy the existing images and templates it has collected from Australia.
According to the OAIC, its determination highlights the lack of transparency around Clearview AI's collection practices, the monetisation of individuals' data for a purpose entirely outside reasonable expectations, and the risk of harm to people whose images are included in its database.
Privacy Commissioner Angelene Falk said the covert collection of this kind of sensitive information is “unreasonably intrusive and unfair”.
“It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database,” she said.
“By its nature, this biometric identity information cannot be reissued or cancelled and may also be replicated and used for identity theft. Individuals featured in the database may also be at risk of misidentification.”
She said the practices undertaken by Clearview AI fall well short of Australians’ expectations for the protection of their personal information. She also said the privacy impacts of Clearview AI’s biometric system were not necessary, legitimate and proportionate, nor did they have regard to any public interest benefits.
It's not over, however: the OAIC is currently finalising an investigation into the Australian Federal Police's trial use of the technology. In April last year, the AFP admitted it had used Clearview AI to help counter child exploitation, despite not having an appropriate legislative framework in place.