Most people are aware that algorithms control what you see on Facebook or Google, but automated decision-making is increasingly being used to determine real-life outcomes as well, influencing everything from how fire departments prevent fires to how police departments prevent crime. Given how much these (often secretive) systems have come to dominate our lives, it’s time we got specific about how algorithms can hurt people. A new report seeks to do just that.
This week, the Future of Privacy Forum released a report examining the harms of automated decision-making. At its core, the report is about finding the language to address a complicated problem that we should all be thinking more about: algorithms can be unfair, illegal and useful, all at the same time. So what should we do about them?
“They’re more, sort of, societal, philosophical questions, at the moment,” Lauren Smith, who co-authored the report, tells Gizmodo. “There’s no clear way to create those overall rules.”
To grapple with these questions, the report features two charts. The first lists the potential harms of automated decision-making, categorising the negative effects of algorithms based on how they hurt people, whether they harm individuals or larger groups, and whether there are existing legal standards for addressing them.
The second chart looks at ways to solve or at least reduce these algorithmic harms. For the ones that are potentially covered by existing legal standards (or are already illegal), there are clearer ways to mitigate negative consequences, such as contacting authorities. For the ones that aren’t, Smith and her team argue that the absence of laws doesn’t necessarily mean there should be more rules.
“We wouldn’t characterise this as ‘the ones without legal analogs need legislation,'” Smith told Gizmodo. “The ones with a legal analogue represent those core values that we have in society that we’ve already enshrined in law. The tactics for mitigating harms that occur anyway through technology should be distinct from ones that are sort of posing these new questions, introducing these new societal debates.”
Let’s compare two types of algorithmic harms. In the first case, a bank uses an algorithm designed to deny all loan applications from black women. Here, there’s an existing law being broken that can be prosecuted. Now compare that example with “filter bubbles,” the kind of ideological echo chambers on social media that, in the worst cases, can radicalise people to violence. While also potentially dangerous, this problem isn’t covered by any current laws or related “legal analogues” – and Smith isn’t sure it should be.
“Is it the responsibility of the technology platform to analyse your data and say, ‘Well, this person has these views. We want to ensure that 30 per cent of the news that they see comes from a different political perspective’? I’m not sure that’s a position that consumers want them to be in.”
As algorithmic tools become entrenched in every aspect of our lives, the tech industry has struggled to define its role in mitigating harm. Should Facebook use AI to filter out hate speech? Should Google intervene when its search protocols show higher-paying jobs to men than to women? We’re just developing an understanding of fairness in algorithms, and part of that evolution is understanding how limited we are in reaching consensus on how to address these problems as they arise. Smith points to the design process.
“There’s a big role for design there and a big role for internal processes,” Smith says, “and for [creating] ethical frameworks, or internal IRBs [institutional review boards] to be part of how they’re thinking about and understanding this.”
Reconfiguring the design process so that more ethical and philosophical questions are considered before an algorithm is put into place will go a lot further than simply relying on regulation. The first step is finding the words.
“These technologies are just beginning to evolve, so to study and understand the impacts they’re having will go a long way towards thinking about what harms they may cause and how to mitigate them,” says Smith.