Pre-Crime: Has Minority Report-style Policing Become a Reality?
Police are using technology and data to predict behaviour and prevent crime – but are innocent people being targeted?
Editor’s note: This film is no longer available online.
What if you could predict a crime before it happens?
It’s the stuff of science fiction – the film Minority Report, set in the year 2054, imagines police officers who arrest individuals for crimes not yet committed.
Their fictional department is called “Pre-Crime”. And it may be fast becoming a reality.
Police in the United States and the United Kingdom are using mathematical algorithms that draw on data collected from social media, surveillance cameras, and police records to take what they say are proactive measures against crime. The software generates subject lists of the people judged to be at greatest risk of being party to violence or of committing a crime.
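The real scoring systems are proprietary – which, as critics note below, is part of the controversy – but the basic mechanics can be sketched. The following is a deliberately toy illustration in Python: every feature, weight, and cut-off here is hypothetical, invented for this sketch rather than taken from any actual police system, showing only how a few record-derived variables might be combined into a risk score and ranked into a subject list.

```python
# Toy illustration only: a hypothetical "subject list" scorer.
# Real predictive-policing models are proprietary; these features and
# weights are invented, loosely mirroring factors reportedly considered
# (prior arrests, ties to victims of violence, age).
import math
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int
    prior_arrests: int             # includes minor offences
    ties_to_violence_victims: int  # associates who were victims of violence

def risk_score(p: Person) -> float:
    """Toy logistic score in [0, 1]; higher means flagged as higher risk."""
    z = (
        -4.0
        + 0.3 * p.prior_arrests
        + 1.2 * p.ties_to_violence_victims
        - 0.05 * max(p.age - 18, 0)  # youth pushes the score up
    )
    return 1.0 / (1.0 + math.exp(-z))

def subject_list(people, top_n=400):
    """Rank everyone by score and keep the top N, i.e. the 'heat list'."""
    return sorted(people, key=risk_score, reverse=True)[:top_n]

people = [
    Person("A", age=22, prior_arrests=2, ties_to_violence_victims=1),
    Person("B", age=45, prior_arrests=0, ties_to_violence_victims=0),
]
for p in subject_list(people):
    print(p.name, round(risk_score(p), 3))
```

Even in this toy version, the critics’ complaint is visible: a couple of minor arrests plus a single social tie can outrank a clean record, and because the weights are hidden, a person who is flagged has no way to contest how they were scored – the situation Robert McDaniel describes below. The cut-off of 400 echoes the “400 most dangerous people” list he mentions.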
Robert McDaniel, a Chicago resident, found himself on such a list.
He had close ties to a victim of violence and was rated 215 times more likely to be involved in violence himself – despite having been arrested only for minor offences such as gambling and smoking marijuana. He says police began watching him closely.
“In the midst of me trying to get my GED I started getting followed home, started having police officers walk up on me, ride up on me, saying my name, knowing my name, where I had been and just things like that,” he says. “The result of this test put us on a list of the 400 most dangerous people in Chicago. How can I be dangerous for smoking weed and shooting dice? Who does this hurt?”
But for police, it’s a tool to tackle crime, one inspired by the way tech giants like Google and Facebook use data to predict behaviour.
“If the algorithms used in the private sector have allowed them to become more successful in targeting their audience to sell product, then we should take advantage of that same algorithm that allows us to become more successful in law enforcement in preventing crime,” says Jerry P Dyer, police chief of Fresno, California.
But critics say Silicon Valley and law enforcement don’t mix, and that “pre-crime” violates privacy, enables racial profiling, and heightens surveillance in poor neighbourhoods. And, apart from their developers, few know how the algorithms that generate subject lists work, making accountability difficult.
“The question is why we do that to ourselves, use those programmes. First off, we didn’t vote on it democratically. Is that what we want? Do we want to become a society that is fully monitored, where everybody gets a score … ?” says Yvonne Hofstetter, an author and the CEO of Teramark Technologies in Zolling, Germany.
“We do that because we want to make money. And the financial players, big technology giants, learned many years ago that you can make a lot of money with personal data and scoring of people, so they made it a business model.”
And for London-based social worker Sukant Chandan, trends like these will only harm working-class, minority communities.
“If you use some kind of predictive technology and software to predict what these people will do in the future, it’s not going to predict anything particularly positive for them,” he says.
“If you want to make money off software to have algorithms for the police, it’s indicative of how our society is progressing away from human solidarity and a human approach, to just squeezing people as hard as you can, in any and every which way.”