Here’s Why Facebook’s FTC Settlement Is A Joke

After some rumour-milling and informed reportage attributable to anonymous sources, Facebook has finally made public the contours of its deal with the U.S. Federal Trade Commission to end a probe into its handling of the Cambridge Analytica scandal. From what few specifics are shared by the company’s general counsel, Colin Stretch (who was supposed to have resigned last year), one thing is quite clear: they got off easy.

This is not just a statement on the $US5 ($7) billion fine Facebook agreed to pay, which is quite a lot of money for regular people, or even for fairly large businesses. For Facebook, however, $US5 ($7) billion is approximately what the company makes in a single month.

As a number of high-level Democrats, including Senators Mark Warner and Ron Wyden, Representative David Cicilline, and FTC Commissioner Rohit Chopra, have pointed out, though, the settlement does little to hold Facebook accountable, despite Stretch’s assertion that the “accountability required by this agreement surpasses current U.S. law.”

Yes, the U.S. government won vastly increased oversight into Facebook’s day-to-day, and the company’s board will be buffered by “an independent privacy committee of Facebook’s board of directors,” according to the FTC, which itself “must be independent and will be appointed by an independent nominating committee.” Who makes up this committee, who makes these appointments, or which individual or group will act as Facebook’s new “independent privacy assessor” remain open questions.

As Bloomberg points out, however, none of these changes do much to alter Facebook’s core business model of hoovering up as much consumer data as possible. The enormous number of users on the platform and the richness of data collected about them mean Facebook is still a tinderbox. The FTC is essentially telling them that it’s ok to keep playing with matches, as long as they’re not caught lighting any.

What Facebook would like you to believe is that this agreement will instil in consumers a buck-stops-here confidence in Mark Zuckerberg. During the 20 years the agreement is active, Stretch writes, “we will have quarterly certifications to verify that our privacy controls are working […] the process stops at the desk of our CEO, who will sign his name to verify that we did what we said we would.”

And as the FTC itself notes, this means Zuckerberg may have less wiggle room to avoid personal penalties down the line if future violations are discovered, though given the weak nature of this very settlement, it’s doubtful such a stipulation would draw any real blood from Zuckerberg’s pallid husk should that situation arise.

When asked if, consistent with Facebook’s stated values on transparency, these quarterly reports would be made public, a spokesperson declined to answer affirmatively or on-the-record, instead writing back that the company would begin providing updates in the coming months.

In what may be the most insulting paragraph of Stretch’s note, which Facebook published exactly when it knew news of former special counsel Robert Mueller’s testimony would drown out any other news item, he writes, “the agreement will require a fundamental shift in the way we approach our work […] It will mark a sharper turn toward privacy, on a different scale than anything we’ve done in the past.”

I don’t know how Facebook approaches its work. What I do know is how it approaches its users: by placating them, incrementally, and more often than not only after being caught doing something untoward, with promises of fundamental changes in how it’s thinking about or implementing privacy, how it’s empowering us, the consumers, to control our privacy, and how privacy, privacy, privacy.

Why would we trust Zuckerberg’s sign-off on quarterly data privacy assessments when he and his team have consistently published statements claiming Facebook will protect our privacy, claims that, in light of Cambridge Analytica, turned out to be broadly untrue?

Here are just a few examples from Facebook’s Newsroom page:

  • March 30, 2019, written by Mark Zuckerberg: “People around the world have called for comprehensive privacy regulation in line with the European Union’s General Data Protection Regulation, and I agree.”

  • May 1, 2018: “we’re sharing some of the first steps we’re taking to better protect people’s privacy […] We’re starting with a feature that addresses feedback we’ve heard consistently from people who use Facebook, privacy advocates and regulators: everyone should have more information and control over the data Facebook receives from other websites and apps that use our services.”

  • April 17, 2018: “In recent weeks we’ve announced several steps to give people more control over their privacy and explain how we use data […] We not only want to comply with the law, but also go beyond our obligations to build new and improved privacy experiences for everyone on Facebook”

  • April 16, 2018: “As Mark said last week, we believe everyone deserves good privacy controls.”

  • April 4, 2018: “It’s important to show people in black and white how our products work – it’s one of the ways people can make informed decisions about their privacy.”

  • March 28, 2018: “we’re taking additional steps in the coming weeks to put people more in control of their privacy […] We’ve worked with regulators, legislators and privacy experts on these tools and updates.”

  • November 27, 2017: “Protecting people’s privacy is central to how we’ve designed our ad system.”

  • May 22, 2014: “Over the next few weeks, we’ll start rolling out a new and expanded privacy checkup tool, which will take people through a few steps to review things like who they’re posting to, which apps they use, and the privacy of key pieces of information on their profile […] Everything about how privacy works on Facebook remains the same.”

  • October 23, 2013: “On Facebook, you control who you share with […] We take the safety of teens very seriously, so they will see an extra reminder before they can share publicly […] they’ll see a reminder that the post can be seen by anyone, not just people they know, with an option to change the post’s privacy.”

  • January 28, 2013: “Last year, we launched improved privacy tools that let people see what they’ve shared, to see what photos have been tagged of them, and to be able to take action if there’s something they don’t like.”

  • December 21, 2012: “Along with the overall effort to continue bringing privacy controls up front, we’re adding in-context notices throughout Facebook.”

  • September 30, 2012: “We wanted to share some of the ways we have carefully designed our versions of the features with your privacy in mind”

  • November 29, 2011, written by Mark Zuckerberg: “With each new tool, we’ve added new privacy controls to ensure that you continue to have complete control over who sees everything you share […] I’m committed to making Facebook the leader in transparency and control around privacy.”

  • May 26, 2010: “Facebook today responded to user comments and concerns about privacy by announcing it will introduce simpler and more powerful controls for sharing personal information […] Starting with the changes announced today, the company will also prioritise ease-of-use in its privacy design.”

  • December 9, 2009: “many users have expressed that the current set of privacy choices are confusing or overwhelming. In response, the Privacy Settings page has been completely redesigned with a goal of making the controls easy, intuitive and accessible.”

  • August 27, 2009: “Facebook today announced plans to further improve people’s control over their information and enable them to make more informed choices about their privacy.”

  • October 16, 2007: “When Mark and his co-founders built the Facebook website in 2004, privacy was a core tenet. This was evident early on by the segmented structure of networks and extensive privacy options […] Facebook will continue to develop sophisticated safety technology and offer users extensive privacy controls so they can make their information available only to the people they choose.”

  • September 26, 2006: “Facebook has launched additional privacy controls with this expansion that allow every user to: • Block other users in specific networks from searching for his or her name. • Prevent people in those networks from messaging, poking and adding him or her as a friend. • Control whether his or her profile picture shows up in search results.”

  • September 8, 2006: “Facebook, the Internet’s leading social utility, today announced additional controls for News Feed and Mini-Feed in response to user feedback and to reaffirm its commitment to industry-leading privacy practices.”

“We have heard that words and apologies are not enough and that we need to show action,” Stretch wrote today, tapping another beloved vein of Facebook’s mine of excuses:

  • April 29, 2019: “Over the past two years, we have made significant improvements in how we monitor for and take action against abuse on our platform.”

  • December 18, 2018: “We know that we need to do more: to listen, look deeper and take action to respect fundamental rights.”

  • November 15, 2018: “The fact that victims typically have to report this content before we can take action can be upsetting for them.”

  • October 26, 2018: “Our elections war room has teams from across the company, including from threat intelligence, data science, software engineering, research, community operations and legal. These groups helped quickly identify, investigate and evaluate the problem, and then take action to stop it.”

  • October 22, 2018: “We use reports from our community and technology like machine learning and artificial intelligence to detect bad behaviour and take action more quickly.”

  • September 13, 2018: “This will help us identify and take action against more types of misinformation, faster.”

  • August 28, 2018: “We want to make it more difficult for people to manipulate our platform in Myanmar and will continue to investigate and take action on this behaviour.”

  • July 25, 2018: “We use reports from our community and technology like machine learning and artificial intelligence to detect bad behaviour and take action more quickly.”

  • May 23, 2018: “We also take action against entire Pages and websites that repeatedly share false news, reducing their overall News Feed distribution.”

  • December 19, 2017: “We review reports and take action on abuse, like removing content, disabling accounts, and limiting certain features like commenting for people who have violated our Community Standards.”

  • June 15, 2017: “Because we don’t want terrorists to have a place anywhere in the family of Facebook apps, we have begun work on systems to enable us to take action against terrorist accounts across all our platforms, including WhatsApp and Instagram”

  • And March 23, 2012, in an action/privacy two-for-one: “We’ll take action to protect the privacy and security of our users, whether by engaging policymakers or, where appropriate, by initiating legal action, including by shutting down applications that abuse their privileges.”

No company has ever been so profoundly full of shit as Facebook. While lawmakers are right to criticise this settlement as a slap on the wrist, ultimately no perfect deal could have been brokered: nothing short of the complete dissolution of Facebook would suffice to undo the deep breach of trust Zuckerberg has sown.