Police badges and uniforms

AI and the future of policing: algorithms on the beat

Image credit: Getty Images

Police use of artificial intelligence has attracted controversy in the USA and now it is increasingly being marketed in the UK. Could machine learning and algorithms actually help make the criminal justice system fairer?

There are now so many businesses trying to sell what purports to be artificial intelligence (AI) to UK law enforcement agencies that you might be forgiven for wondering whether British police will soon need an algorithm of their own just to work out which products are worth having and which are not.

Palantir, Ripjar and LexisNexis are among the talked-about names known to be marketing their data-fuelled systems to the police and security services. Home Secretary Amber Rudd is hoping to harness the power of data analytics to zero in on stabbing hotspots; American company PredPol has had its predictive technology trialled by Kent Police; and Durham Constabulary has used a machine-learning system to assess the risk of some individuals reoffending.

The UK’s most senior counter-terrorism officer has hinted at an increasing role for AI in monitoring tens of thousands of people on terrorism watch lists. The technology could also aid investigations into organised crime by helping construct maps showing relationships between criminal players.

There have also been rumours that neural networks are being fired up to make links between facial photographs held on custody databases and elements identified in illegal images recovered from paedophiles’ seized devices (if true, the police IT teams had better be quick about it, as the UK’s Biometrics Commissioner has warned them that many of the 20 million custody photographs currently stored on their systems are being held unlawfully and might need to be destroyed).

Meanwhile, computational models built at Edinburgh University last year learnt to solve fictional ‘whodunnit’ mysteries by binge-watching episodes of TV show ‘CSI: Crime Scene Investigation’ – and they were promptly outperformed by humans. Dr Lea Frermann, one of that study’s authors, remarks: “Overall, it’s very clear that humans are better at this task than our computerised models.”

Undoubtedly, AI has become a de rigueur technology acronym – one that is now used to describe such a wide range of techniques that it risks becoming linguistically meaningless – but gathering precise data about the on-the-market AI products themselves is another matter. Palantir, for one, is famously publicity-shy. The influential CIA-funded company, founded by billionaire Peter Thiel and named after a fictional crystal-ball-type object from the ‘Lord of the Rings’ trilogy, failed to respond to my requests for an interview. It reportedly works with the UK government’s listening station, GCHQ, which in turn partners with MI6 and MI5, so perhaps the secret service approach has rubbed off. “Palantir has a large London office. Draw your own conclusions,” remarks one industry insider when I ask how deeply the company is involved in the UK.

Its seven-storey office is indeed spacious. It apparently boasts a scooter track on each floor. A Star Wars stormtrooper and a crusader-style medieval knight stand sentry near the front entrance, these life-size models just about visible to nosey passers-by. There is no sign advertising Palantir’s presence, but its address is easily obtainable via an internet search. In what seems like a metaphor for its ‘hush hush’ attitude, Palantir recently applied to Westminster Council for permission to install a stainless steel security grille over part of the building’s frontage. The council turned down the request, saying it would create a visual obstruction, which, from Palantir’s perspective, was presumably the whole point. Archival Freedom of Information documents suggest the company has also maintained a discreet office in Cheltenham, not far from GCHQ’s doughnut-shaped base.

Not all firms operating in the fields of big data and security are quite this eager to draw a veil of secrecy over themselves, but even when they do not actively try and dodge scrutiny, the results can be unenlightening. At a counterterrorism event earlier this year, I asked a salesman for one AI firm whether he actually understood how the products he was marketing worked. “To the extent that anyone without two degrees in mathematics does, yes,” was his less-than-confident reply. He then relayed the standard explanation – namely, that with sophisticated algorithms and the right data in enormous quantities, it will be possible to make increasingly accurate predictions. The technology was, he stressed, a “black box”, its logic largely inscrutable to mere mortals, but it could be provided to law enforcement agencies on a ‘try before you buy’ basis. So much for transparency!

Then there are the disconcerting comparisons that seem tailor-made to confirm every sinister stereotype about AI. Sean Bair works for American crime-prediction business Bair Analytics, and he spoke to me a few years ago during a visit to London, between meetings with government figures including personnel from the Cabinet Office. His firm’s software was, he suggested in a reference to the 2002 film ‘Minority Report’, “almost like PreCrime”. ‘Minority Report’ was based on a short story by the sci-fi writer Philip K Dick. In the story, police can see into the future and arrest would-be miscreants, incarcerating them before they have had the chance to commit crimes.

In reality, the situation as regards UK policing is far more prosaic. “This is not ‘Minority Report’,” says Peter Neyroud, a former police chief turned University of Cambridge academic. “We are not being given glimpses of the future – although that might be quite useful, actually.”

Neyroud is referring to an algorithmic forecasting tool that has been used in Durham, north-east England, to make some custody decisions, but he questions whether such assessments really amount to anything new. “Triage has quite a long history going back to the 19th century,” he says, before conceding there is a “complicated debate” ongoing about big data and policing.

He is adamant, however, that “the one thing this is not is artificial intelligence – there is absolutely nothing artificial about the intelligence.

“At each and every stage of this [Durham’s system], there are human judgements about the system in place: judgements about at what point in the criminal justice system to insert it, and what the impact of using it is, as well as about the constraints on that and the discretion options available to custody officers.”

However, Neyroud believes there are legitimate concerns about some of the for-profit companies now trying to break into the UK market.

“If you’re going to make decisions of such importance to public protection and people’s lives, the algorithm has to be transparent,” he says. “Personally, I think the algorithms have to be built by the public authority and be under the control of the public authority.

“They can have support from an outside party, but what we can’t have is somebody trotting in and flogging their wares without the public authority being able to scrutinise the guts of the black box.”

‘If you can accurately predict somebody who is likely to commit a serious harm event, I think the public would expect us as the police to do something about that.’

Peter Neyroud, University of Cambridge

Use of predictive policing and offender management algorithms is far more widespread in the USA than in the UK. Across the Atlantic, such technology is increasingly being mired in controversy amid claims it unfairly penalises black people and those from deprived backgrounds.

In 2016, investigative journalism website ProPublica published a detailed analysis showing how Compas, an algorithmic tool used for predicting defendants’ reoffending rates, was disproportionately likely to misclassify black defendants as future criminals. The analysis found that black defendants were wrongly labelled in this way at almost twice the rate of their white counterparts. Northpointe, the company behind Compas, disputed ProPublica’s findings, criticised its methodology and defended the accuracy of its product’s predictions, but the site’s analysis prompted other investigations into police AI.
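ProPublica’s headline finding boils down to a comparison of error rates between groups. The short Python sketch below shows how a false-positive-rate check of that general kind can be computed; the records are invented for illustration and are not the Compas data.

# Sketch: comparing false positive rates across groups, in the spirit of
# ProPublica's analysis. The records are invented for illustration only;
# this is not the Compas dataset and the numbers mean nothing in themselves.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ('A', True, False), ('A', True, True), ('A', False, False),
    ('B', True, False), ('B', False, False), ('B', False, True),
    # ...in practice, thousands of rows drawn from court records
]

counts = defaultdict(lambda: {'fp': 0, 'negatives': 0})
for group, predicted_high, reoffended in records:
    if not reoffended:                 # only people who did NOT go on to reoffend
        counts[group]['negatives'] += 1
        if predicted_high:             # ...yet were flagged as likely future criminals
            counts[group]['fp'] += 1

for group, c in sorted(counts.items()):
    rate = c['fp'] / c['negatives'] if c['negatives'] else float('nan')
    print(f'group {group}: false positive rate = {rate:.2f}')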

Last year, the San Francisco-based Human Rights Data Analysis Group released the results of a similar study that examined how well PredPol’s crime-mapping program performed when tasked with predicting rates of drug use in different areas of Oakland, one of the cities in the San Francisco Bay area. Crime maps produced by PredPol’s software, which relies on police data, were compared against maps showing estimated drug use based on a combination of public health survey information and demography statistics. While maps of the latter type revealed drug use in Oakland was likely to be fairly widespread across all neighbourhoods and classes, PredPol’s maps suggested it was concentrated mostly in parts of the city known to be home to mostly non-white and low-income residents.

The study’s authors, William Isaac and Kristian Lum, have warned that predictive policing technology could result in a ‘feedback loop’ developing. If police choose to concentrate many new patrols in areas that have historically been subject to their attentions, they may be more likely to record more crimes as taking place in those areas compared with in neighbouring ones. This in turn could lead to yet more resources being deployed in these same, historically ‘overpoliced’, areas, thereby resulting in more crimes again being recorded there, and so on ad infinitum. In other words, the software may at best merely tell police what they already know and at worst it could end up reinforcing discriminatory habits.
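The feedback loop Isaac and Lum describe can be reproduced in a toy simulation. The sketch below uses invented crime and detection rates and a deliberately crude patrol-allocation rule, nothing drawn from PredPol itself, to show how recorded crime drifts towards whichever area already has the larger recorded history, even though underlying offending is identical in both.

# Toy simulation of the 'feedback loop' described by Isaac and Lum.
# Two areas have IDENTICAL underlying crime, but area 0 starts with a larger
# recorded history (historical over-policing). Patrols follow recorded crime,
# and recording follows patrols. Every number here is invented.
import random

random.seed(1)

TRUE_CRIME_RATE = 50          # incidents per period, the same in both areas
DETECTION_PER_PATROL = 0.02   # share of incidents one patrol will record
TOTAL_PATROLS = 20

recorded = [120, 80]          # area 0 begins with more recorded crime

for period in range(10):
    # Allocate patrols in proportion to recorded crime so far
    share = recorded[0] / sum(recorded)
    patrols = [round(TOTAL_PATROLS * share), 0]
    patrols[1] = TOTAL_PATROLS - patrols[0]

    # What gets recorded depends on patrol presence, not on true differences
    for area in range(2):
        detection = min(1.0, patrols[area] * DETECTION_PER_PATROL)
        new_records = sum(random.random() < detection
                          for _ in range(TRUE_CRIME_RATE))
        recorded[area] += new_records

    print(f'period {period}: patrols={patrols}, recorded={recorded}')

Run over successive periods, the allocation rule keeps nudging patrols, and therefore records, towards area 0, which is exactly the loop the researchers warn about.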

Critics have questioned the appropriateness of leveraging drug crime data to generate forecasts using PredPol’s products. They point out that this is not, in fact, what the software was designed for. However, Isaac and Lum counter that, as the US government is considering widening the scope of place-based predictive policing to include drugs, their conclusions stand. They also press the case for greater transparency from industry, saying their study was only possible because researchers associated with PredPol published an academic paper that included their algorithm.

Lack of openness has been an issue in New Orleans too. The existence of a hitherto unknown partnership between that city’s police department and Palantir was revealed earlier this year by an investigative journalist who questioned why the tie-up had never been properly disclosed. As part of its relationship with the New Orleans Police Department (NOPD), Palantir reportedly deployed a predictive policing system without even members of the city council knowing about it. True to form, the tech company chose not to respond to that journalist’s questions about its relationship with NOPD.

Ana Muñiz, assistant professor of criminology at the University of California, thinks institutional secrecy is an impediment to winning public support for police use of AI.

“The algorithms are proprietary much of the time, because they’re owned by private companies,” she says. “There is the issue of the ability to obscure the process when there are contracts with a private company, versus when the government is developing something.

“It’s difficult enough to get information when it’s just the government involved, because it’s privileged law enforcement information that could compromise an investigation, but then when you add in these private entities which have this proprietary knowledge, that adds another layer of non-transparency.”

Marion Oswald, an academic who has advised the UK government on legal issues with IT, says cultural differences have opened up a gulf between American and British models of law enforcement. “In America, they’ve certainly gone down the route of using them [AI-type technology products] in ways that I don’t think would be deemed acceptable here,” she says.

However, she cautions against dismissing algorithmic decision-making out of hand and thinks it would be wrong to deny the machines the benefit of potentially sensitive data. “At a recent presentation I gave, somebody raised the issue of gender,” she says. “They said: ‘Having the sex of the offender considered as a factor in police custody decisions, isn’t that discrimination on the grounds of gender?’

“I replied that if you took it out then you’d be assuming there is the exact equivalent level of offending by women as by men, and isn’t that then discriminating against women?

“In fact, you’re much more likely to commit a serious offence if you’re a man than if you’re a woman. It’s not always easy, therefore, to say that simply because there is a particular bit of information – about the location where someone lives, say – then therefore that’s discrimination. You’d have to do more analysis of the impact.”

There are well-evidenced links between poverty and particular types of crime, meaning it may make good sense for police to deploy more resources to the most deprived areas. Likewise, it is at least theoretically possible that certain types of crime may be disproportionately committed by people with particular backgrounds or who belong to specific cultural communities. IBM, one of the biggest players in the field of AI, has acknowledged such systems often display bias but says that it will ultimately be possible to curb the flaws and thereby use technology to reduce societal discrimination. The company says it has developed systems to detect bias in AI. Algorithms to police algorithms, so to speak.
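IBM has not set out the detail of those systems here, but the general shape of an automated bias check is simple enough to sketch. The Python snippet below is a generic demographic-parity style test, not IBM’s tooling; the 0.8 threshold echoes the ‘four-fifths rule’ familiar from US employment law and is only one possible choice.

# Generic bias check: does a model flag one group as 'high risk' far more
# often than another? A demographic-parity style test; not IBM's product.
def flag_rates(predictions):
    '''predictions: list of (group, flagged_high_risk) pairs.'''
    totals, flagged = {}, {}
    for group, is_flagged in predictions:
        totals[group] = totals.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + int(is_flagged)
    return {g: flagged[g] / totals[g] for g in totals}

def parity_warning(predictions, threshold=0.8):
    rates = flag_rates(predictions)
    highest = max(rates.values())
    ratio = min(rates.values()) / highest if highest else 1.0
    return ratio < threshold, rates, ratio

# Invented output from some hypothetical risk model
preds = ([('A', True)] * 30 + [('A', False)] * 70 +
         [('B', True)] * 55 + [('B', False)] * 45)
flagged_as_biased, rates, ratio = parity_warning(preds)
print(rates, f'ratio={ratio:.2f}', 'potentially biased' if flagged_as_biased else 'ok')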

According to Peter Neyroud, it is important not to get stymied by the issue of whether people from particular postcodes might be more affected than others by algorithmic decision-making.

“If the system is likely to predict your strong likelihood of committing a murder, a serious assault or rape, does it really matter where you’re born or where you’ve come from?” he asks. “I don’t think it does. I think from the public’s point of view, protection from those types of crimes would override that particular difficulty.

“If you can accurately predict somebody who is likely to commit a serious harm event, I think the public would expect us as the police to do something about that.”

Some believe the best approach to countering misgivings might be to embrace AI and engineer it to become more human.

As Fei-Fei Li, an associate professor of computer science at Stanford University, puts it: “Despite its name, there is nothing ‘artificial’ about this technology – it is made by humans, intended to behave like humans and affects humans.”

Consistent decisions

Alexander Babuta from British defence think-tank the Royal United Services Institute also believes there can be too much focus on bias.

“We seem to be holding the machines to a higher standard than human decision-makers,” he says. “We want them to be completely transparent, we want them to be completely free from bias and we want them to be supremely accurate every time, but all decision-making is partially unknowable. The question has to be, is the machine any worse or better than our current system?”

Human judgement is, he says, influenced by a multitude of irrelevant factors, ranging from mood, weather and time of day to fluctuations in attention and the random nature of neuronal firing. Inevitably, this results in people in positions of authority making different judgements at different times, even when faced with identical scenarios. Lack of consistency, in other words. That is something people have hitherto been prepared to put up with, to an extent, perhaps partly because they have had no choice. AI changes that, however, because the algorithms’ decisions are consistent when faced with the same scenarios, even if they may still be biased.

Babuta believes the most pressing ethical dilemma surrounding AI pertains to the legal principle of mens rea, or ‘guilty mind’. According to that principle, one must be conscious of what one is doing in order to be brought to book. Machines might play a role in holding us accountable for our actions, but how might we hold them accountable for theirs?

The truth is that AI’s biggest selling point – the fact that it is not human – might also be its greatest drawback.

Case study: Durham’s HART model

Durham Constabulary runs a scheme intended to divert relatively low-risk offenders away from prison by offering them the option of having prosecution deferred provided they agree to various structured interventions, such as treatment for drug or alcohol addiction.

In order to decide who might be suitable for this scheme, the force has used the Harm Assessment Risk Tool (HART), an algorithmic model developed at Cambridge University that can give guidance to custody officers.

HART predicts an individual’s risk of reoffending based on 34 variables, most of which focus on prior history of criminal behaviour. The individual’s age, gender and where they live – but not ethnicity – are also taken into account. A ‘decision tree’ system means no one factor can have an overwhelming impact on the result.

The aim of the process is to sort people into ‘high risk’ (at risk of committing a new serious offence, i.e. murder, attempted murder, grievous bodily harm, robbery, a sexual offence or a firearms offence), ‘medium risk’ (at risk of committing a new offence defined as ‘non-serious’) and ‘low risk’ (no new offending of any kind). Real-world outcomes are monitored against the software predictions. According to one academic involved in trials of HART, the system is currently running at around 90 per cent accuracy. Officers are advised to continue using discretion rather than blindly following what the computer says.
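HART’s trained model and its 34 real predictors are not publicly available in code form, but a tree-based, three-band classifier of the general kind described can be sketched as follows. The features, training data and model choice below are placeholders, with scikit-learn’s RandomForestClassifier (an ensemble of decision trees) standing in for whatever Durham actually runs.

# Illustrative three-band risk classifier in the spirit of the description
# above. The features, data and model are invented placeholders; HART's real
# predictors and trained model are not public.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature vectors: [age, prior_offences, years_since_last_offence]
X_train = [
    [19, 6, 0], [45, 0, 10], [31, 2, 3], [23, 9, 1],
    [52, 1, 8], [28, 4, 2], [37, 0, 12], [21, 7, 0],
]
# Bands: 2 = high risk, 1 = medium risk, 0 = low risk (labels invented)
y_train = [2, 0, 1, 2, 0, 1, 0, 2]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A custody officer would treat the output as advice, not a binding decision
bands = {0: 'low risk', 1: 'medium risk', 2: 'high risk'}
new_case = [[26, 3, 1]]
print('suggested band:', bands[int(model.predict(new_case)[0])])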

The HART tool attracted controversy after a draft academic paper published by the algorithm’s creators appeared to indicate concerns about the use of postcode data and recommended this ‘predictor value’ should be removed from the mix.  

That paper also stated: “The HART model intentionally favours (i.e. applies a lower cost to) cautious errors, where the offenders’ levels of risk are overestimated. Underestimates of the offenders’ actual risk levels, referred to as dangerous errors, are assigned a higher cost and therefore occur less frequently.”
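The asymmetry the paper describes, charging ‘dangerous’ underestimates more heavily than ‘cautious’ overestimates, is a standard cost-sensitive learning idea. One generic way to express it, not necessarily how HART implements it internally, is as a cost matrix over the three risk bands, with the final band chosen to minimise expected cost. All of the cost values below are invented for illustration.

# Sketch of cost-sensitive band selection: underestimating risk ('dangerous'
# errors) is charged more heavily than overestimating it ('cautious' errors).
# The cost values are invented; HART's actual weighting is not public.

# COST[true_band][predicted_band]; bands: 0 = low, 1 = medium, 2 = high
COST = [
    [0,  1,  2],   # truly low risk: over-estimates carry only a small cost
    [5,  0,  1],   # truly medium: under-estimating costs more than over-estimating
    [20, 10, 0],   # truly high risk: calling it low is the costliest error
]

def choose_band(probabilities):
    '''Pick the band with the lowest expected cost.

    probabilities: the model's estimated chance that the person truly
    belongs in each band, e.g. [p_low, p_medium, p_high].
    '''
    expected = []
    for predicted in range(3):
        cost = sum(probabilities[true] * COST[true][predicted]
                   for true in range(3))
        expected.append(cost)
    return min(range(3), key=expected.__getitem__)

# Even a modest chance of 'truly high risk' pushes the decision upwards...
print(choose_band([0.6, 0.3, 0.1]))    # chooses band 2 (high risk)
# ...while a confidently low-risk case stays low
print(choose_band([0.9, 0.08, 0.02]))  # chooses band 0 (low risk)

Weighted like this, the model errs on the side of caution by design, which is precisely the trade-off between ‘cautious’ and ‘dangerous’ errors that the draft paper set out.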
