Overreliance on violence risk scores can expose first responders to false positives. That was my concern as I read Justin Jouvenal's January 10, 2016, Washington Post article, "The New Way Police Are Surveilling You: Calculating Your Threat Score," which describes dynamics that may give first responders a false sense of security.
A Bright Red Warning?
The article describes the Fresno, California, Police Department's use of a police-analytics software system that scores a suspect's potential for violence "the way a bank might run a credit report." In one case cited, the program scoured billions of data points, including arrest reports, property records, commercial databases, deep-Web searches and the man's social-media postings, and calculated his threat level as the highest of three color-coded scores: a bright red warning.
An Efficiency Tool – With Critical Vulnerabilities
Some law enforcement officials say such tools allow them to do more with less, and they have credited the technology with providing breaks in many cases. The Virginia State Police found the man who killed a TV news crew during a live broadcast last year after his license plate was captured by a reader (see my blog "Workplace Terminations: The Security Risks Few Companies Consider in Time").
But the powerful systems have also become flash points for civil libertarians and activists, who say they represent a troubling intrusion on privacy, have been deployed with little public oversight and have potential for abuse or error. Some say laws are needed to protect the public.
Issues for Concern
As officers respond to calls, the software automatically runs the address. The search returns the names of residents and scans them against a range of publicly available data to generate a color-coded threat level for each person or address: green, yellow or red.
The software maker will not divulge how the score is calculated; it considers its actuarial formula a "trade secret." That makes sense from a business perspective, but not from others. We don't know, for example, how much weight is applied to a misdemeanor, a felony or a threatening comment on Facebook.
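To see why the hidden weighting matters, consider a minimal sketch of a color-coded scoring scheme like the one the article describes. The weights, event names and thresholds below are pure invention for illustration; the vendor's actual formula is a trade secret and unknown to us:

```python
# Hypothetical illustration only: the real system's weights and thresholds
# are a trade secret. These numbers are invented.
WEIGHTS = {
    "felony_conviction": 40,
    "misdemeanor_conviction": 10,
    "threatening_social_post": 25,
}

def threat_color(events):
    """Map a list of data points to a color-coded level: green, yellow or red."""
    score = sum(WEIGHTS.get(e, 0) for e in events)
    if score >= 50:
        return "red"
    if score >= 20:
        return "yellow"
    return "green"

print(threat_color(["misdemeanor_conviction"]))                             # green
print(threat_color(["misdemeanor_conviction", "threatening_social_post"]))  # yellow
print(threat_color(["felony_conviction", "threatening_social_post"]))       # red
```

The point of the sketch is that the same record can flip from green to red if a single weight or threshold is adjusted, and with the formula kept secret, neither the police nor the public can evaluate those choices.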
It's not hard to agree with Fresno Police Chief Dyer, who said his "officers are expected to know the unknown and see the unseen. They are making split-second decisions based on limited facts. The more you can provide in terms of intelligence and video, the more safely you can respond to calls."
A Brief Example
But only the software maker, not the police or the public, knows how a score is derived. The system might mistakenly elevate someone's threat level based on demographics, such as living in a high-crime area or a lower socio-economic environment, or on a misreading of the language and semantics of a social-media posting.
As an example of a mistaken threat level, a Fresno council member referred to a local media report that a woman's threat level was elevated because she had been tweeting about a card game called "Rage," a word the software apparently flagged in its assessment of her social media.
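That failure mode is easy to reproduce with a naive keyword screen. In the sketch below, the word list and sample posts are invented (the real system's method is unknown); it simply shows how a tweet about the card game "Rage" trips the same filter as a genuinely alarming word, because the screen has no sense of context:

```python
# Invented flag list for illustration; the vendor's actual criteria are secret.
FLAGGED_WORDS = {"rage", "kill", "bomb"}

def flags_post(post):
    """Naive keyword screen with no understanding of context or semantics."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & FLAGGED_WORDS)

print(flags_post("Game night! Who's up for a round of Rage?"))  # True -- a false positive
print(flags_post("Looking forward to the card tournament."))    # False
```

A behavior-based assessment would ask what the word means in context; a keyword match cannot.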
Technology Can’t Determine Motive, Intent or Capacity to Commit Violence
I recognize a responding police officer's need for instantaneous information. But our firm uses behavior-based methods in its threat assessment process to determine whether someone has the motive, intent or capacity to commit an act of violence. In our experience, "automated decision making" has several shortcomings when assessing the risk of targeted violence. Any risk assessment should rest on sound empirical research on violence, and any equation derived from empirically researched risk factors needs to be sensitive enough to minimize false positives on law-abiding citizens. In other words, where a person lives does not make them a law-breaker.
Private-Sector Software and Public-Sector Intelligence
Another concern I have about "trade secret" software is whether it can keep pace with the counter-terrorism information that federal intelligence-gathering agencies disseminate to law enforcement, information that may reveal behavioral warning signs. A recent federal intelligence update highlighted the importance of educating families, teachers, religious leaders, communities and the private sector on the most common indicators of violent extremism, and of alerting law enforcement to individuals exhibiting such behaviors or activities. These updates give officials an emerging picture of the distinct behaviors often associated with an individual mobilizing for violence, and it is highly doubtful that software systems controlled by private entities will stay current with what comes out of the intelligence community.
Software based on unknown data points risks being ineffective, carries a considerable risk of false positives and of bias, and has been sharply criticized in the past for its potential to stigmatize citizens and deprive them of civil liberties.
Our firm fully endorses enhanced information-sharing systems for police departments; it is imperative to have the latest technologies. But algorithms controlled and kept secret by the private sector may not serve the best interest of the person who needs the information most: a police officer responding to a potentially violent situation.
Civil liberties groups, communities, federal, state and local law enforcement, and the U.S. intelligence community all have a moral obligation to provide police officers with accurate information to protect their lives and those of our citizenry.
I welcome a public discussion. That’s what is needed.