A New York Police Department officer watches video feeds in the Lower Manhattan Security Initiative facility in New York, September 1, 2011. Reuters/Lucas Jackson

Police departments in major US cities have been trying to forestall violent crime by mining data and using algorithms to predict when and where crimes are likely to take place — and who is likely to commit them.

“Predictive policing” software is now employed by at least 20 of the country’s 50 largest police departments, including New York City, Chicago, and Los Angeles. But advocates of criminal justice reform say those practices could lead to racial profiling and aggressive policing.

A sweeping coalition of civil rights groups and technology research organizations recently issued a damning critique of predictive policing, arguing that the software frequently relies on biased or incomplete data to justify boosting law-enforcement activity in already over-policed areas.

At a time when police departments nationwide are under scrutiny for controversial policing strategies, these crime-forecasting tools will only worsen discriminatory practices, the groups argued in a joint statement.

“Predictive policing tools threaten to provide a misleading and undeserved imprimatur of impartiality for an institution that desperately needs fundamental change,” the statement read.

Among the 17 signatories were the American Civil Liberties Union, the NAACP, the Brennan Center for Justice, and New America’s Open Technology Institute.

It’s not the first time in recent weeks that crime-forecasting tools have come under fire — in August, Chicago’s use of predictive policing was roundly criticized after a report found its program had no effect on the city’s homicide trend.

The police department had developed a “Strategic Subjects List” compiling the names of people thought to be at the highest risk of involvement in gun violence, but the report found that people named on the list were no more likely to be victims of a shooting, and many became needless targets of police suspicion as a result.

Chicago’s police department responded that the report’s findings are “no longer relevant,” as the prediction model has been updated several times and “significantly improved” since the RAND Corporation examined the program.

A protester walks past a line of police officers standing guard in front of the District 1 police headquarters in Chicago, Illinois, November 24, 2015. REUTERS/Frank Polich

In one sense, police departments were using predictive policing long before computers and algorithms existed, University of Michigan data scientist H.V. Jagadish said in an interview with Business Insider.

Police departments have always tried to direct their resources more efficiently by making predictions based on data and prior experience. Before a public event, for example, a department will predict how large the crowd will be, how likely a crime is to occur, and how many officers should be assigned to the location.

The problem with predictive policing algorithms, Jagadish said, is that police departments haven’t struck the right balance between more efficiently targeting crime and avoiding civil-liberties conflicts — nor have they demonstrated a clear understanding that algorithms suggest probabilities, not certainties.

It’s not sensible to ask police departments to avoid using big data in their crime prevention strategies — but it is necessary that they understand the human impact of the tools they’re using, Jagadish said.

“You can’t say, ‘don’t look for patterns, don’t do anything,’” he added.

“With predictive policing we have significant civil liberties issues that arise very quickly. And I think that, at least initially, many police departments just saw the upside of that and didn’t think so much.”
