The Brennan Centre has released a cache of 6,000 pages of Los Angeles Police Department documents, obtained under the California Public Records Act, detailing years of surveillance campaigns. The collection is a critical resource for activists, journalists, and First Amendment advocates who’ve been collecting evidence that the LAPD habitually spies on residents, particularly on activists and people of colour with records or, simply, “gang affiliations.” It also builds a strong case that police have way too many resources and time on their hands.
The expansive view into police use of social media, which spans from 2009 into 2021, shows cops covertly posing under pseudonyms and using predictive policing to monitor people they suspect are likely to commit crimes, in some cases with almost no oversight. New documents show that the department plans to roll out a program that monitors connections between individuals.
First, the LAPD has been watching through virtually every peephole. An excerpt from a 2010 intelligence-gathering manual states that officers should gather information during “almost any incident,” achieved by several divisions monitoring Twitter, Facebook, “internet chats and postings,” radio frequencies, and virtually all video cameras operated by the LAPD, the FBI, and other agencies. It adds that scouts should collect information “no matter the source.”
In some cases, monitoring Twitter involved monitoring a particular person, rather than just an event; police attempted to get people to hand over their social media accounts along with identifying information. A standard Field Interview (FI) card, a form officers fill out when stopping a person, includes blank fields for Instagram, Facebook, Twitter, and other profiles. At the bottom, another field asks for a Social Security number, with a notice stating that officers can, by law, order the subject to hand it over. In 2020, Chief of Police Michel Moore instructed officers via email that information on the FI cards should be recorded “thoroughly,” and that the department would review them for “completeness.”
Other documents more explicitly reveal that officers are gathering information even when there’s no crime, i.e., using predictive policing, which the LAPD embraced early. Predictive policing measures pop up in a 2016 description of the LAPD’s Computer Statistics (COMPSTAT) wing, which is tasked with monitoring “crime analysis products” that assess “crime series, patterns, hot spots, trends, clusters, spikes and/or offenders for the purpose of arresting and prosecuting criminals.”
Predictive policing uses amassed crime statistics both to monitor location-based “hot spots” and to flag individuals with criminal records or histories of drug use. Research has repeatedly found that predictive policing tools further entrench racist policing practices. The LAPD stopped using the program LASER, for example, in 2019 after the inspector general found that, among other things, the program allowed police to stop people on the “Chronic Offenders” list merely for being on the list, and a name could end up on the list for fuzzily defined reasons such as supposed gang affiliation. It was also unclear whether people who were arrested but never convicted of a crime ended up on the list. The department also ended its use of PredPol, a popular location-based predictive policing tool, in 2020 on the grounds that it was “belt-tightening” amid the covid-19 pandemic.
The use of predictive algorithms in criminal justice has also been found to perpetuate racial profiling and can baselessly lengthen arrest records. Those longer records are then fed into other racially biased predictive programs that assess recidivism likelihood and lengthen prison sentences.
As the Brennan Centre notes, COMPSTAT now uses Palantir software. The company offers the ability to map out online connections, explicitly including “gang” networks, which could give police more thinly reasoned grounds to zero in on innocent people. The FI cards were also added to the Palantir database, according to a September 2020 BuzzFeed report.
This will come as no surprise to activists, but more evidence reveals an interest in tracking crowds. A 2016 email chain shows an LAPD officer telling a representative from Dataminr that they planned to use the social media monitoring software at the May Day protest; last year, the Intercept reported that Dataminr compiled reams of predictive data on Black Lives Matter protesters’ plans and locations. Knowing where a crowd plans to be helps police head off and kettle protesters, creating a wall of officers to trap marchers and, in the LAPD’s case, allegedly attack them with riot weapons, round them up, and zip-tie them. The LAPD also trialled Skopenow and demonstrably used Geofeedia, the released documents show. While the department reportedly stopped using Geofeedia after Facebook and Twitter blocked access to the software, an undated document shows that it had used a list of Geofeedia search terms including #BlackLivesMatter, Tamir Rice, and #SayHerName.
Further, the LAPD appears to be expanding its powers. This year, it bought a licence for Media Sonar, which told the LAPD that, per the department’s request, it can enable “open-source searching of individuals.” In a lengthy slide presentation, Media Sonar promised that it would track communications, build profiles identifying networks of people’s connections for “unexpected leads,” permanently store posts even after they’ve been deleted, and track “gang/drug/weapon” slang. It can even “learn relevant slang and names,” a capability that, as Brennan Centre fellow Mary Pat Dwyer points out in a breakdown, is risky because “the highly contextual nature of social media … makes it ripe for misinterpretation.”
That same explanation was used by an LAPD sergeant who was disciplined for posting, among other racist rants, that Nipsey Hussle “chose the lifestyle that ultimately killed him” because he “perpetuated the criminal gang lifestyle.” The department has since implemented a rule that prohibits officers from using police insignia along with harassing and racist posts, but if the department wants to connect officers’ Social Security numbers with their social accounts and monitor them for “slang,” it certainly could. (The Brennan Centre has also compiled a catalogue of 35 police departments’ social media policies, which it will continue to update.)
The trove holds exceptional value because we often get only piecemeal clues as to what surveillance methods police departments are using until it’s too late. Last year, BuzzFeed News confronted the LAPD with evidence that it had used Clearview AI’s facial recognition technology, which has scraped untold numbers of photos from social media without users’ consent. The LAPD banned the use of facial recognition after BuzzFeed’s inquiries, but the outlet discovered that officers had already used the software to conduct 475 searches over a three-month period.