An Australian startup has developed an app that can identify your name and address with a single photo.
It's already being used by U.S. law enforcement despite a lack of regulation and independent testing.
Until now, Clearview AI wasn't a household name. But a New York Times report over the weekend revealed it has been used by hundreds of law enforcement agencies in the U.S., including local police and the FBI, for the past few years.
The app was created by Australian developer Hoan Ton-That, who previously built an unsuccessful iPhone game and an app that added Donald Trump's hair to users' photos.
Clearview AI lets a user compare a photo against a database of more than three billion images scraped from Facebook, YouTube, Venmo and other social media sites. The database is made up of ordinary people and is designed to positively identify a person even if they have no criminal record.
Law enforcement officials in the U.S. say Clearview AI has been used to solve cases ranging from petty theft to murder. Part of its success is that it doesn't require a perfect image to identify a suspect.
"With Clearview, you can use photos that aren't perfect," Detective Sergeant Nick Ferrara told the New York Times. "A person can be wearing a hat or glasses, or it can be a profile shot or partial view of their face."
While this may seem positive, it raises questions about accuracy and whether the app could lead to false convictions. Furthermore, defendants don't have to be told they were identified by the app, as long as it wasn't the only evidence used for their arrest.
Proposed laws paving the way for a facial recognition database in Australia have been abandoned after a parliamentary report found they needed stronger privacy protections for citizens.
Ton-That admitted the system isn't perfect, saying the app works up to 75 percent of the time. Most of the images in the database are taken at eye level, whereas security cameras tend to be mounted on walls and ceilings.
Due to the lack of independent testing, it is not publicly known how often false matches occur.
The company says it only uses publicly available images, such as public Facebook profiles, but changing your privacy settings or deleting images won't necessarily stop photos of you from ending up in its system. There is currently no way to remove your photos from the Clearview database if your Facebook profile has already been scraped, although the company is reportedly working on a tool to let people request image removal.
Clearview AI also has control over image search results. While researching her article, New York Times reporter Kashmir Hill initially saw images of herself come up in the system. The results later disappeared.
"After the company realised I was asking officers to run my photo through the app, my face was flagged by Clearview's systems and for a while showed no matches. When asked about this, Ton-That laughed and called it a 'software bug'," Hill wrote in the article.
The images were restored once the company began talking to Hill for the article. Some were more than 10 years old, and others were pictures she had never seen before. The app still positively identified her even when her nose and lower face were covered.
While Clearview AI isn't publicly available, its potential for stalking is concerning. The New York Times analysed the app's code and found language that would allow it to be paired with augmented-reality glasses, potentially letting a wearer identify people in real time.
Despite the lack of testing, and with facial recognition legislation in the U.S. still in its infancy, Clearview AI has reportedly been used by more than 600 law enforcement agencies in the past year alone, without public knowledge.
An Australian government department has shown interest in forcing pornography sites to verify a user's age, and it's willing to offer its facial recognition services to get it done.