In its latest privacy faux pas to come to light, Facebook confirmed it built a now-defunct app for its employees that used facial recognition technology to identify coworkers and their friends, CNET reported this week.
Business Insider first broke the story Friday, citing multiple anonymous sources who claimed employees could simply point their phone at someone for the app to recognise that person’s name and profile picture, and that one version—provided you fed it enough data—could track down anyone on the platform. It was purportedly developed between 2015 and 2016, a.k.a. pre-Cambridge Analytica scandal and subsequent heightened federal scrutiny, and has since been discontinued.
In a statement to CNET, a Facebook spokesperson denied that the app was capable of identifying any Facebook user, as it was “only available to Facebook employees, and could only recognise employees and their friends who had face recognition enabled.” According to the spokesperson, Facebook’s teams routinely build internal software like this “as a way to learn about new technologies.” The company did not immediately respond to Gizmodo’s request for comment.
While it may seem like much ado about a defunct company app, this news gives us a better idea of the extent to which Facebook experimented with and fine-tuned the kind of facial recognition technology it later incorporated into its platforms in 2017, which then became the subject of fiery consumer pushback and a federal investigation. All so that users wouldn’t have to deal with the hassle of tagging their friends in photos.
While I’m certainly side-eyeing the supposed capabilities of this internal app and what they mean for any future software Facebook develops, given the amount of public and federal heat the company’s currently under over its face tech, it wouldn’t make much marketing sense for ol’ Zuck to power forward on that front just yet.
In response to these criticisms—or, more likely, the FTC mandates the company incurred along with a $US5 ($7) billion fine—Facebook has adopted new policies to increase transparency around its use of facial recognition software on its platforms, such as no longer turning it on by default, because that’s incredibly creepy. And yet, as we now know based on reports about this internal app, things could have been far, far creepier.