The most powerful tool in Facebook’s considerable arsenal isn’t its incalculable trove of user data or yearly revenue that exceeds the GDP of more than a few sovereign nations — it’s the platform’s ability to spread out any PR disaster over months or years so that all but the most dogged beat reporters stop giving a crap entirely.
There was the slow stream of admissions around Facebook’s role in the 2016 US election, which began with its CEO dismissing the notion that misinformation on his platform could have been used for interference as “a pretty crazy idea”. Combating election manipulation is now one of the company’s stated priorities.
Recall also the quiet, protracted fessing-up Facebook did around the part it played in Myanmar’s genocide against Muslims.
Most famously, this strategy benefited Facebook during the Cambridge Analytica scandal: successive drips of information revealed that the company’s knowledge of possible malfeasance stretched further back, and that the scale of misuse was larger, than initially reported, all while momentum behind the increasingly complicated story dwindled.
Unfortunately, the strategy appears to be working, more or less.
You might be wondering why we’re talking about this again, now, and the answer, of course, is that Facebook is pulling its favourite trick again. Remember how, in the wake of the Cambridge scandal, the platform decided to take a long, hard look at how much access it was providing to developers and the largely unvetted apps they were dumping onto the social network?
Zuckerberg promised, in March of 2018, to “investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014”. By May, the platform had suspended around 200 apps, and that number had doubled by August. Now, over a year later, Facebook has informed the public that “tens of thousands have been suspended for a variety of reasons”.
Tens of thousands! How many tens? What happened to all the numbers between 400 and a theoretically infinite number of thousands? By way of explanation, Facebook offered this:
We initially identified apps for investigation based on how many users they had and how much data they could access. Now, we also identify apps based on signals associated with an app’s potential to abuse our policies.
The platform claims that, somehow, these tens of thousands of potentially abusive apps were associated with only around 400 individual developers, and that many were “still in their testing phase”.
Will we, at some point in the future, learn our data was even less secure than we currently believe and that there were considerably more untoward apps floating around Facebook? As the company repeatedly noted in this week’s blog post, “our investigation is not yet complete”.