New U.S. Bill Aims To Hold Tech Companies Accountable For Their Shitty Algorithms


Over the years, tech companies have found a convenient scapegoat for some of their most egregious mistakes in a voiceless part of their operations: their algorithms. But a bill introduced in the U.S. on Wednesday seeks to stop these automated systems from being an acceptable excuse for bias, unintended or not.

The Algorithmic Accountability Act, introduced by American Senators Ron Wyden and Cory Booker as well as Representative Yvette D. Clarke, would require companies to assess the risks their algorithms pose to consumers and ensure the systems are deployed in a way that doesn't result in "inaccurate, unfair, biased or discriminatory decisions impacting Americans," according to a press release published by Senator Wyden's office on Wednesday.

The requirements detailed in the bill would be enforced by the Federal Trade Commission (FTC) and would apply to companies that are already regulated by this agency and make over $70 million a year, according to the bill.

The requirements would also apply to companies that hold data on over 1 million consumers or consumer devices, even if they don't make over $70 million a year. Violations would be treated as unfair or deceptive acts or practices under the FTC's authority.

The bill defines an “automated decision system” as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.”

Plenty of massive tech companies, including Facebook, Google, and Amazon, have poured research and funding into machine learning systems, whether to moderate their platforms or to develop powerful surveillance tools.

The Algorithmic Accountability Act would force wealthy companies, or companies with a wealth of consumer data (or both), to assess whether the data feeding these automated systems, and the decisions subsequently made with that data, are prone to bias.

To date, machine learning systems have proven to be consistently flawed, with biased or inaccurate decisions often harming the most vulnerable populations of consumers, such as women and people of colour.

Under this legislation, companies would be required to conduct assessments of their algorithmic systems, which would include "a detailed description of the automated decision system, its design, its training, data, and its purpose," according to the bill.

These assessments would include the system's benefits and costs, featuring details on data minimisation practices, how long data and decisions are stored for, what type of information is available to consumers, whether they are able to fix or appeal any of the system's decisions, and who these decisions are ultimately sent to. They would also include the potential risks of the system, and how the company plans to minimise them.
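To make that list concrete, here is a hypothetical sketch of what one such assessment might look like if captured as a data structure. The format, field names, and example values are all invented for illustration; the bill does not prescribe any particular format.

```python
# Hypothetical impact-assessment record, based only on the items the bill
# lists. Field names and example values are invented; the bill does not
# prescribe a format.
assessment = {
    "system_description": {  # the system, its design, training data, and purpose
        "purpose": "automated loan approval",
        "training_data": "five years of historical lending decisions",
    },
    "benefits_and_costs": {
        "data_minimisation": "only income and credit history retained",
        "retention_period_months": 24,           # how long data and decisions are stored
        "consumer_disclosure": "decision and key inputs shown to applicant",
        "correction_or_appeal_available": True,  # can consumers fix or appeal decisions?
        "decision_recipients": ["lender", "credit bureau"],
    },
    "risks": ["postcode acting as a proxy for protected attributes"],
    "mitigations": ["drop postcode; audit approval rates across groups"],
}
```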

Algorithms have biases in large part because the humans creating them have biases. Those biases are also shaped by the datasets these systems are trained on (which are, again, chosen by humans).
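As a minimal illustration of that point, the sketch below trains a toy "model" on invented, skewed hiring data and shows it reproducing the skew. The dataset, groups, and decision rule are all hypothetical; nothing here comes from the bill or from any real system.

```python
# Minimal, hypothetical sketch: a model trained on skewed historical data
# reproduces that skew. Dataset and decision rule are invented.
from collections import defaultdict

# Historical hiring decisions: (group, qualified, hired).
# Group "a" was hired when qualified; group "b" was rejected regardless.
training_data = [
    ("a", True, True), ("a", True, True), ("a", False, False),
    ("b", True, False), ("b", True, False), ("b", False, False),
]

# "Training": record the hire rate for each (group, qualified) slice.
rates = defaultdict(lambda: [0, 0])  # (hires, total)
for group, qualified, hired in training_data:
    rates[(group, qualified)][0] += hired
    rates[(group, qualified)][1] += 1

def predict(group: str, qualified: bool) -> bool:
    """Predict 'hire' if the historical hire rate for this slice is >= 50%."""
    hires, total = rates[(group, qualified)]
    return total > 0 and hires / total >= 0.5

# Two equally qualified candidates get different outcomes, because the model
# learned the historical pattern, not any stated hiring rule.
print(predict("a", qualified=True))  # True
print(predict("b", qualified=True))  # False
```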

Ethics in machine learning is still a relatively young field, and this bill appears aimed at forcing companies to think about how their algorithms work, who they might harm or disenfranchise, and how to actively work toward fairer, more accurate, and more just systems. What the bill doesn't specify is the checks and balances for the claims in these reports.

The algorithmic space, especially its more troubling corners like facial recognition, remains largely lawless, so legislation like this can only add accountability and transparency.

But it’s still a pretty broad bill for an issue that has a lot of nuances, and it’s all but assured that companies would still figure out ways to evade accountability by continuing to blame the machines.

Simply forcing companies to be more transparent, without a clear plan for how regulatory bodies will identify and act on red flags, doesn't fully solve the problem of discriminatory systems. And at this point, we probably shouldn't expect a single piece of legislation to do that.

The bill does create a paper trail on the stated purpose and design of these systems, but beyond that, it is still pretty vague on how the agency will specifically identify issues.

[Senator Ron Wyden]

