AI Experts Say US’ Predictive ‘Extreme Vetting’ Of Immigrants Is ‘Tailor-Made For Discrimination’

An alliance of more than 50 civil liberties groups and more than 50 individual AI experts sent dual letters to the US Department of Homeland Security (DHS) today, calling for an end to a plan to screen immigrants with predictive “extreme vetting” software. In a separate petition also launched today, several groups specifically urged IBM not to help build the extreme vetting tool. This winter, representatives of IBM, Booz Allen Hamilton, LexisNexis and other companies attended an information session with DHS officials interested in the companies’ capacity to build such predictive software, The Intercept reports.

As part of the Trump Administration’s controversial immigration overhaul, Homeland Security’s US Immigration and Customs Enforcement (ICE) proposed an “Extreme Vetting Initiative”, echoing Trump’s own words. The initiative would eventually create predictive software that automates the vetting process, using algorithms to “determine and evaluate an applicant’s probability of becoming a positively contributing member of society, as well as their ability to contribute to national interests”. In their letter to the DHS, dozens of AI experts called the proposed algorithm, which would also attempt to predict terrorist leanings, “tailor-made for discrimination”.

Privacy advocates and civil rights groups have long been sceptical of predictive software. Last year, ProPublica found racial biases in algorithms used to predict a defendant’s likelihood of reoffending: Black defendants were routinely scored as more likely to reoffend than white defendants, even when their offences were less severe. In their letter, the AI experts voiced concern that extreme vetting algorithms could replicate these same biases “under a veneer of objectivity”:

Inevitably, because these characteristics are difficult (if not impossible) to define and measure, any algorithm will depend on “proxies” that are more easily observed and may bear little or no relationship to the characteristics of interest. For example, developers could stipulate that a Facebook post criticising US foreign policy would identify a visa applicant as a threat to national interests. They could also treat income as a proxy for a person’s contributions to society, despite the fact that financial compensation fails to adequately capture people’s roles in their communities or the economy.

“Contribution to society” is, of course, an entirely subjective concept, and any definition developers settle on will naturally reflect their biases. There is no single indicator, so developers must pick quantifiable data points that, synthesised together, can stand in for something as nebulous as the “probability of becoming a positively contributing member of society” sought by ICE.
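To make that concrete, here is a minimal sketch of how such a score might be synthesised. Every detail in it – the proxies, the weights, the threshold – is a hypothetical illustration, not anything drawn from an actual DHS specification; the point is that each of those choices is a human judgment call dressed up as a number.

```python
# Hypothetical sketch of a proxy-based "contribution score".
# None of these proxies, weights or thresholds come from DHS;
# they illustrate how developer choices become the definition.

WEIGHTS = {
    "annual_income": 0.5,         # income as a proxy for "contribution"
    "years_employed": 0.3,
    "negative_posts": -0.8,       # e.g. posts critical of US foreign policy
    "flagged_connections": -1.0,  # friends deemed "radical" by some list
}

def contribution_score(applicant: dict) -> float:
    """Weighted sum of proxies: the 'objectivity' lives entirely in the weights."""
    return sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)

applicant = {
    "annual_income": 0.4,   # values normalised to a 0-1 range
    "years_employed": 0.6,
    "negative_posts": 0.2,
    "flagged_connections": 0.0,
}

THRESHOLD = 0.3  # who picked this number, and why?
print(contribution_score(applicant) >= THRESHOLD)
```

Change a weight or the threshold and a different set of people gets flagged; the maths is mechanical, but the values encode the developers’ assumptions.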

That introduces a lot of ethical problems. First, which data points would be included? The DHS already collects social media data on visa applicants, so it’s plausible that information could feed into a contribution “score”. Does criticising the US government make someone more or less likely to contribute? What if they “like” more left-leaning than right-leaning content? What if they’re friends with someone deemed “radical”? And because such predictive software would be proprietary, the public would likely never know what data the algorithm uses to make its decisions.

The letter goes on to argue that while algorithms allow processing at an unprecedented scale – millions of people would be affected by ICE’s automated vetting – operating accurately at that scale isn’t feasible:

[T]here is a wealth of literature demonstrating that even the “best” automated decision making models generate an unacceptable number of errors when predicting rare events. On the scale of the American population and immigration rates, criminal acts are relatively rare, and terrorist acts are extremely rare. The frequency of individuals’ “contribut[ing] to national interests” is unknown. As a result, even the most accurate possible model would generate a very large number of false positives – innocent individuals falsely identified as presenting a risk of crime or terrorism who would face serious repercussions not connected to their real level of risk.
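The letter’s false-positive point is just base-rate arithmetic. As a rough illustration – the numbers below are assumptions chosen for the sketch, not figures from the letter – even a generously accurate model hunting a one-in-100,000 event buries the real cases under false alarms:

```python
# Base-rate illustration: all numbers are assumptions, not DHS figures.
population = 1_000_000     # applicants screened
base_rate = 1 / 100_000    # actual threats per applicant (a rare event)
tpr = 0.99                 # generous true positive rate
fpr = 0.01                 # generous false positive rate

threats = population * base_rate          # ~10 actual threats
true_pos = tpr * threats                  # ~9.9 of them caught
false_pos = fpr * (population - threats)  # ~10,000 innocents flagged

precision = true_pos / (true_pos + false_pos)
print(f"innocent people flagged: {false_pos:,.0f}")
print(f"chance a flag is a real threat: {precision:.2%}")  # ~0.10%
```

In this sketch, roughly 10,000 innocent applicants are flagged for every 10 genuine threats, so a flagged person has about a 0.1 per cent chance of actually being one – and tightening the model simply trades missed threats for more innocent people flagged.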

There’s no reliable way to predict criminality, terrorist leanings, or likelihood of contributing to society – especially not at the scale required to vet everyone seeking to immigrate to the US.

Gizmodo reached out to IBM for comment but had not heard back at the time of writing.

[Reuters, The Intercept]

