Microsoft's president and chief legal officer, Brad Smith, called for federal regulation of face recognition in the U.S. in a new blog post. Half of all American adults already have their face in a federal database, and vendors are supplying face recognition technology to schools, airports, and baseball stadiums. Federal regulation could help address numerous privacy concerns while also giving the public a voice in the technology's advancement, he argues.
Smith calls for federal regulation because, in our currently unregulated state, leaving individual companies to make ethical decisions on face recognition "is an inadequate substitute for decision making by the public and its representatives in a democratic republic."
"We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology," Smith writes. "As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government."
Smith urges Congress to convene a bipartisan commission of experts to essentially create a homegrown version of the GDPR: a regulatory framework that balances the potential of face recognition against the need to prevent misuse. "This should build on recent work by academics and in the public and private sectors to assess these issues and to develop clearer ethical principles for this technology," he writes.
The statement continues by prompting readers to consider whether we should press the government to install various regulatory measures advanced by privacy experts. It's widely known that face recognition software can be buggy and inaccurate on darker-skinned people. Smith raises the question of a federal law defining a minimum performance level for accurate identification, banning face recognition software with unacceptably high misidentification rates. Another possibility: requiring police agencies to post public notices anywhere face recognition is used on the public. The same would apply to retailers, who have quietly sought patents on identifying shoppers and matching them with details about their preferences.
"It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike," Smith writes. Theoretically, setting a minimum benchmark for face recognition accuracy would push all suppliers to refine the tech, prompting fewer false positives. If shoppers are informed which stores use face recognition, they could avoid those locations, sending a clear message to companies on whether they consent to the practice.
The statement is as robust a discussion as we've seen on the topic from Microsoft, which received lukewarm praise after announcing it had begun addressing the racial disparities in its own face recognition software. Which brings us to Microsoft's recent moral crisis prompted by its contract with ICE. Smith's post brings up the backlash, reiterates that the company doesn't currently supply ICE with face recognition technology, then moves on.
But consider the following anecdote from today's NYT write-up:
April Isenhower, a Microsoft spokeswoman, declined to answer questions about whether the company provided facial recognition services to other government agencies. She also declined to discuss the company's position on consumer consent for facial recognition.
Microsoft remains its own best example of the limits of asking Silicon Valley to self-disclose anything incriminating. Still, face recognition is just one of a whole suite of technologies — including body cameras, drones, and so-called smart policing — in need of regulation, public input, and, what's missing from Smith's blog post, the ability for the American people to say no.