Clearview AI, the shady face recognition firm that claims to have contracts with federal, state, and local cops across the country, has landed a roughly $US50,000 ($69,410) deal with the U.S. military for augmented reality glasses.
First flagged by Tech Inquiry’s Jack Poulson, Air Force procurement documents show that the service awarded a $US49,847 ($69,198) contract to Clearview AI for the purposes of “protecting airfields with augmented reality facial recognition glasses.” The contract is designated as part of the Small Business Innovation Research (SBIR) program, meaning Clearview’s job is to determine for the Air Force whether such applications are feasible.
The contract isn’t described further, but the most obvious possibility is that the Air Force wants to equip security personnel at its facilities with AR glasses that will enable them to verify on the fly whether someone is authorised personnel. This theory dovetails with the way Clearview’s technology already works — users upload a picture to an app, which is then compared against the company’s database of faces. Back in 2020, the New York Times reported that Clearview’s app contained code that would allow pairing with AR glasses, theoretically meaning users could walk around identifying anyone whose image had already been obtained by Clearview’s data-scraping operations.
Clearview has been the subject of massive controversy pretty much everywhere it pops up, and for good reason. The Huffington Post reported that its founder, Hoan Ton-That, and others who worked for the company have “deep, longstanding ties” to far-right extremists. Whether Clearview legally obtained the photos it uses to populate its databases and train its face recognition algorithms is also a matter of dispute. Ton-That has bragged that the company’s databases hold billions of photos scraped from the public web. While mass-downloading publicly accessible data is legal in the U.S., some states have biometrics privacy laws on the books — most notably Illinois, where Clearview is battling an ACLU-backed lawsuit claiming the company was legally required to obtain the consent of people entered into its database.
In other countries, Clearview has run into more stringent opposition. In May 2021, regulators in France, Austria, Italy, Greece, and the United Kingdom were collectively asked to investigate it for violating European data privacy laws. Clearview exited Canada entirely in 2020 amid two federal privacy investigations, and Canadian Privacy Commissioner Daniel Therrien said in February 2021 that Clearview’s technology broke laws requiring consent for the collection of biometrics and constituted illegal mass surveillance. Canadian authorities demanded that Clearview delete images of their nationals from its database, with Australian regulators issuing similar demands later that year.
Ton-That insisted in an email statement to Gizmodo that the technology being tested with the Air Force does not include access to its troves of scraped images.
“We value the United States Air Force, and their position in defending the nation’s security and interests,” Ton-That wrote. “We continually research and develop new technologies, processes, and platforms to meet current and future security challenges, and look forward to any opportunities that would bring us together with the Air Force in that realm.”
“This particular technology remains in R&D, with the end goal being to leverage emerging capabilities to improve overall security,” he added. “The implementation is designed around a specific and controlled dataset, rather than Clearview AI’s 10B image dataset. Once realised, we believe this technology will be an excellent fit for numerous security situations.”
Face recognition is already being used by cops and the feds. Clearview, for example, has signed contracts with the FBI and U.S. Immigration and Customs Enforcement. That’s despite current face recognition tech’s reputation for being unreliable, easily abused for racial profiling, and generally invasive. The idea that police could get their hands on goggles that would allow them to run everyone they see against a face recognition database, for example, is pretty dystopian.
The U.S. military has expressed interest in AR for obvious reasons — the many ways in which digital overlays could enhance the productivity, efficiency, and lethality of troops — but the technology is in its nascent stages. The Air Force is currently testing the use of AR goggles to assist in aircraft maintenance training and operations, and it has done proof of concept work related to weapons training and virtual command centres. Last year, the U.S. Army delayed a $US22 ($31) billion program to equip soldiers with AR goggles, the Integrated Visual Augmentation System (IVAS), saying it wouldn’t be ready for deployment until at least fall 2022.
IVAS is based on Microsoft’s HoloLens 2 and has been tested since 2019. According to Task & Purpose, it can be used for training, live language translation, face recognition, navigation, providing situational awareness, and projecting locations or objectives. It also contains the kind of high-resolution thermal and night vision sensors that previously would have been separate gear. Bloomberg reported earlier this month, however, that internal Pentagon assessments have deemed it nowhere near ready for use in actual combat, and only 5,000 goggles have been ordered so far. Testing to determine whether soldiers can rely on IVAS in combat scenarios won’t be carried out until May.
An Air Force Research Lab public affairs director didn’t immediately respond to Gizmodo’s request for comment; we’ll update this piece when they do.