U.S. Justice Department Admits: We Don’t Even Know How Many Predictive Policing Tools We’ve Funded

Police car lights during a protest. (Photo: Getty Images)

Several of the nation’s largest cities use federal tax dollars to fund the development of software promising to predict the locations of future crimes. They’ve done so for the better part of a decade. For the first time, though, the agency overseeing the distribution of that money has formally acknowledged it has little idea of the ways the cash has been used.

Department of Justice (DOJ) officials responsible for doling out grants to state and local law enforcement agencies have kept no “specific records” of which police departments have been working with this technology.

A group of Democratic members of Congress first requested a complete list of police departments that received grants to test or implement crime forecasting algorithms in April 2020. The DOJ’s response, obtained exclusively by Gizmodo, not only failed to provide a full accounting of which cities have used federal money to pay for so-called “predictive policing” software, but also neglected to answer lawmakers’ basic questions about whether such tools have ever been assessed by the department to ensure compliance with civil rights laws. One senior senator has expressed outrage over the gaps in the DOJ’s knowledge of its own distribution of taxpayer dollars.

“If the Justice Department doesn’t have better answers than this,” Sen. Ron Wyden, Democrat of Oregon, told Gizmodo, “Congress should debate whether these programs should be allowed at all, let alone funded by taxpayers.” The senator’s office has been working since January to set up a follow-up briefing at the DOJ, but has been unsuccessful so far. The Justice Department did not respond when asked for comment.

The DOJ’s response follows an 18-month joint investigation by Gizmodo and The Markup into PredPol, a California-based predictive policing company recently renamed Geolitica. The investigation relied on more than 7 million crime predictions from dozens of U.S. cities discovered by Gizmodo in the summer of 2020 on an unsecured Amazon server. While limitations in available crime data prevented us from determining PredPol’s impact on local crime rates, our analysis revealed that the software had overwhelmingly targeted predominantly Black and Latino neighbourhoods. In a majority of jurisdictions where data was available, the poorest residents of those cities were also targeted, often relentlessly. The software predicted crimes in low-income neighbourhoods every day, often multiple times. Our analysis concluded that when fewer White residents lived in an area, PredPol was more likely to predict a crime there. The same was true of neighbourhoods with the fewest wealthy residents. (PredPol CEO Brian MacDonald disputed the findings, claiming — without explanation — that Gizmodo’s data was “erroneous” and “incomplete.” PredPol did not, however, request any factual corrections following publication of the story.)

Such tools — which rely on historical crime data analysed by algorithms created by companies such as Oracle and IBM — are increasingly automating decisions around which communities are most frequently monitored by police on patrol. Certain products are not limited to labelling neighbourhoods as potential criminal “hot spots,” but also pin specific people as possible suspects in crimes that have yet to be committed, à la Minority Report.

The Democratic lawmakers first informed U.S. Attorney General Merrick Garland that they’d grown “deeply concerned” over the unchecked expansion of predictive policing in April 2020. They set a May 2021 deadline for the DOJ to write back. When a written response to the lawmakers’ inquiries finally arrived in January of this year — seven months late — they found that Garland and his deputies had seemingly ignored a majority of their questions.

Writing to Garland, the lawmakers attached a list of more than a dozen questions meant to clarify basic facts about the DOJ’s funding of AI-driven software. They sought to learn, for instance, which state and local agencies had specifically used predictive policing tools developed or purchased on the taxpayer’s dime. Further, they sought to learn whether the DOJ had any requirement for such tools to be “tested for efficacy, validity, reliability, and bias”.

The DOJ’s letter, signed by Acting Assistant Attorney General Peter S. Hyun, begins by vaguely acknowledging that the nationwide use of predictive policing had given rise to “complex questions.” While Hyun claimed that the government remains “steadfastly committed” to safeguarding Americans’ civil rights with regard to such data-driven tools, his assurances failed to impress privacy-conscious Wyden, the chief lawmaker behind the inquiry into the department’s funding policies and the chairman of the powerful Senate Finance Committee.

Assistant AG Hyun stated that funding for the development of predictive policing technology had principally come from two sources. One source, known as the Edward Byrne Memorial Justice Assistance Grant Program (JAG) — named for a New York City police officer murdered in 1988 — appears to disburse grants under conditions far less stringent than the other. The Justice Department describes JAG as the nation’s “leading source” of criminal justice funding. According to Hyun, the DOJ’s Bureau of Justice Assistance (BJA), which administers the program, does not keep track of which JAG recipients are spending grants purchasing or developing predictive policing technology.

“BJA does not have specific records that confirm the exact number of grantees and subgrantees within the Edward Byrne Memorial Justice Assistance Grant (JAG) Formula Program that use predictive policing,” Hyun said.

Despite the Justice Department’s uncertainty, some of that money was, in fact, spent on predictive policing. The BJA managed to identify at least five U.S. cities that have used grants to pay for predictive policing since 2015, including Bellingham, Washington; Temple and Ocala, Florida; and Alhambra and Fremont, California. In the case of Temple, Hyun wrote, the funding was used to “identify targets for police intervention.”

In the cities identified by the BJA, grant amounts ranged from $US12,805 ($17,776) to $US774,808 ($1,075,588), the latter being used to purchase a “predictive analytics software solution,” which BJA referred to as “PEN Registers.” (It was not immediately clear whether this is actually the name of a real predictive policing tool; a “pen register” is a police surveillance device that captures phone numbers called from a particular phone line.)

Unlike JAG grants, the second source of funding, a competitive grant program run by the Bureau of Justice Assistance known as the Smart Policing Initiative (SPI), comes with various stipulations intended to ensure projects are achieving intended results in accordance with “best practices”. SPI-funded projects, which have included predictive policing initiatives in Los Angeles, Chicago, and Baton Rouge, are evaluated by researchers who, according to Hyun, are responsible for gauging their impact on civil rights.

In an email, Wyden said he has been unable to obtain even basic information about the federal government’s role in advancing privately developed software intended to forecast crime. As a result, he now says the time may have come for Congress to contemplate a ban on predictive policing, a technology long unpopular with civil rights groups and police accountability advocates.

“It is unfortunate that the Justice Department chose not to answer the majority of my questions about federal funding for predictive policing programs,” he said. His letter to Garland was co-signed by six Democratic colleagues — Senators Ed Markey of Massachusetts, Alex Padilla of California, Raphael Warnock of Georgia, and Jeff Merkley of Oregon, as well as Representatives Yvette Clarke of New York and Sheila Jackson Lee of Texas.

The letter by Wyden and his colleagues stated that algorithms deployed to help automate police decisions have not only suffered from a lack of meaningful oversight, but have been described by academic experts as amplifying long-held racial biases among the nation’s police forces. What’s more, some predictive algorithms might not even do the job for which they were created: multiple audits have found “no evidence they are effective at preventing crime,” the lawmakers said.

The lawmakers wrote that predictive algorithms “may amount to violations of citizens’ constitutional rights to equal protection and due process under the law,” adding it’s possible the technologies may even “violate the presumption of innocence,” long held as a basic requirement in the U.S. for a fair trial.

An internal evaluation by the Los Angeles Police Department in 2019 found, for example, that police strategies relying on AI-driven tools lacked sufficient supervision and “often strayed from their stated goals.” Over the past decade, the LAPD has employed a range of predictive tools used not only to forecast locations where crimes will purportedly occur, but to generate names of L.A. residents who essentially become suspects in crimes yet to be committed.

Some predictive policing tools are modelled on police departments’ worst behaviour. A study published out of New York University in 2019, for instance, revealed that nine police agencies had fed the software data generated “during periods when the department was found to have engaged in various forms of unlawful and biased police practices.” The same researchers noted that they’d observed “few, if any, efforts by police departments or predictive system vendors to adequately assess, mitigate, or provide assurances.”

Assistant AG Hyun went on to note that the DOJ had previously held two symposia to discuss predictive policing, one in 2009 and another in 2010, and had funded the development of a reference guide for agencies interested in predictive policing, released in 2013 by the RAND Corporation.

Both RAND and experts who took part in the symposia foretold the issues the technology would encounter nearly a decade ago. Symposium members noted, for instance, that American police had a “rich history” of privacy-related problems that “have yet to be resolved.” RAND, meanwhile, noted that police partnerships with private companies may allow law enforcement to skirt constitutional safeguards against the collection of private data, writing, “The Fourth Amendment provides little to no protection for data that are stored by third parties.” Very few departments using predictive tools, RAND said, had actually evaluated the “effectiveness of the predictions” or the “interventions developed in response to their predictions.”

Despite Hyun acknowledging that the DOJ had funded predictive tools used to cast suspicion on specific individuals, the guide included in his letter appears to warn against the practice, stating that “fewer problems” would arise from location-based predictions.