
How AI is being used to help fight crime
Should we dread judgement from machines that find patterns in behaviour?
Notorious bank robber Willie Sutton was adamant he never gave the reason “because that’s where the money is” to explain why he robbed banks. The money was genuinely there but it wasn’t his real motivation. That turned out to be something more visceral: because he enjoyed the thrill of it.
That did not stop the fake reason being used to name one of our many informal societal ‘laws of nature’. Sutton’s Law states: ‘don’t ignore the obvious in diagnosis’. The question that police forces around the world have been investigating is whether they should look at crime the same way. Should they do more to focus on areas where crime is more likely in order to make the most of stretched resources? To find out how they could do that, some have enlisted the technologies of machine learning and artificial intelligence (AI).
Royal United Services Institute (RUSI) research fellow Alexander Babuta and University of Northumbria senior fellow Marion Oswald have tagged this use “austerity AI” in a chapter for a forthcoming book on policing and machine learning, because of the way that pressure to save costs is acting as a driver for adoption.
The government sees the attraction. At the conclusion of its Policing for the Future inquiry in 2018, the House of Commons Home Affairs select committee wrote: “There are enormous opportunities for policing, including greater use of artificial intelligence and the exploitation of data, but the service is often failing to take advantage of them.”
Yet take-up so far by the police does not suggest widespread enthusiasm. In 2018, campaigning group Liberty sent 90 Freedom of Information Act (FOI) requests to police forces across the UK. Fourteen of them said they had either used, or had plans to trial, predictive crime-mapping programs, individual risk-assessment software or both. Some of these employed a degree of machine learning; others were simple mapping applications, and some did not last long. Cheshire Constabulary said its officers had used a simple crime-mapping application called Operation Forewarn for ten months in 2015, but the tool had no publicly disclosed successor or follow-up.
Kent Constabulary was an early adopter. It started trialling predictive-policing software more than five years ago, but by the time Liberty's FOI request arrived the force had decided to stop using the product, though it claimed to be working on an in-house system to save on licensing costs. The supplier claims that the software, which divides a map of an area into boxes, can predict twice as many of the crimes that will occur in each box over a given timeframe as police officers asked to make the same predictions.
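The core mechanics of such boxed-map systems can be illustrated in a few lines of code. The sketch below, in Python, is not the vendor's algorithm: it simply assumes an illustrative cell size and recency weighting to show how recorded incidents might be turned into per-box scores that rank where to patrol.

```python
# A minimal sketch of grid-based crime hot-spot scoring, assuming a simple
# recency-weighted count per map cell. The cell size, decay rate and scoring
# rule are assumptions made purely for illustration.
from collections import defaultdict
from dataclasses import dataclass

CELL_SIZE_DEG = 0.002   # roughly 150-200 m at UK latitudes (assumption)
DECAY = 0.9             # weight applied per day of incident age (assumption)

@dataclass
class Incident:
    lat: float
    lon: float
    days_ago: int

def cell_of(incident: Incident) -> tuple[int, int]:
    """Map an incident to the grid cell ('box') containing it."""
    return (int(incident.lat // CELL_SIZE_DEG), int(incident.lon // CELL_SIZE_DEG))

def hotspot_scores(incidents: list[Incident]) -> dict[tuple[int, int], float]:
    """Score each cell by a recency-weighted count of recorded incidents."""
    scores: dict[tuple[int, int], float] = defaultdict(float)
    for inc in incidents:
        scores[cell_of(inc)] += DECAY ** inc.days_ago
    return scores

if __name__ == "__main__":
    history = [Incident(53.210, -2.520, 1), Incident(53.2104, -2.5203, 3),
               Incident(53.190, -2.540, 30)]
    ranked = sorted(hotspot_scores(history).items(), key=lambda kv: -kv[1])
    print("Cells to prioritise:", ranked[:2])
```

Commercial products layer statistical models and officer workflows on top of this counting step, but the output is the same in kind: a ranked list of boxes for patrols to visit.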
The key issue that worries criminologists is whether targeting policing in this way risks over-policing some areas out of proportion to their actual crime levels.
How reliable are forensic results?
In the land of fictional detectives, no one questions the forensic evidence. Back in the real world, things are different.
A series of miscarriages of justice caused by flawed forensics, especially in the USA, has drawn new attention to how unreliable forensic evidence can sometimes be.
A 2016 paper on bite-mark identification by a group of 40 dentistry and legal researchers noted there is a growing list of forensic techniques that have been rejected by the courts over the past 40 years because they relied so heavily on subjective opinion.
As well as distortion, a major problem is the tendency of forensics experts to find false positives. Other techniques have been rejected outright: the US National Academy of Sciences dismissed ‘voiceprint’ analysis in 1979, and attempts to determine the origin of bullets from their metal composition were finally rejected 25 years later.
Then take a staple of crime dramas that has sealed in the public consciousness the idea that ballistics forensics are foolproof – rifling marks on fired bullets. University of Iowa statistics professor Alicia Carriquiry argues the judgments made by forensics experts in ballistics are, as with many of the discredited techniques, overly subjective.
One of her concerns is that decomposition changes the markings as the casing erodes. Another is, again, that forensics experts tend to lean towards false positives. They readily identify possible matches without taking into account the possibility that the similar markings might be coincidental.
To try to deal with the problem, Carriquiry’s team has collected bullet data from crime labs across the USA to build a catalogue of marks that they fed to a machine-learning algorithm.
As with any other area of machine learning, a major issue is collecting enough data to be confident that the system is not prone to the same kinds of false positives as human experts. By bringing together much more data, the chances of that should fall.
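To make the matching idea concrete, the sketch below assumes each bullet’s rifling marks have already been reduced to a one-dimensional surface profile and compares a questioned bullet against a catalogue using a simple correlation score. The feature representation, the scoring rule and the review threshold are all illustrative assumptions, not the method used by Carriquiry’s team.

```python
# A minimal sketch of catalogue matching for toolmark signatures. Real systems
# use far richer features and learned models; the correlation score and the
# 0.9 threshold here are illustrative assumptions only.
import numpy as np

def similarity(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Normalised cross-correlation between two equal-length signatures."""
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-12)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-12)
    return float(np.dot(a, b) / len(a))

def rank_candidates(query: np.ndarray, catalogue: dict[str, np.ndarray],
                    threshold: float = 0.9) -> list[tuple[str, float]]:
    """Return catalogue entries similar enough to warrant expert review."""
    scores = [(name, similarity(query, sig)) for name, sig in catalogue.items()]
    # Report scores rather than a yes/no "match", so examiners can weigh the
    # possibility that high similarity is coincidental.
    return sorted([s for s in scores if s[1] >= threshold], key=lambda s: -s[1])
```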
TU Wien’s Impress is a project with a similar intent: its aim is to build a database of shoe imprints that can be used to train AI models for more reliable identification.
In other work, Sebastian Wärmländer of Stockholm University and colleagues have been developing machine-learning models to help determine the effects of heat on bone.
High temperatures do not just affect the DNA evidence that can be collected from bone but also the appearance of cut and impact marks, which can mislead forensic analysts and juries.
Although data science can be prone to bias, researchers in the field do expect that collecting more information will help improve the accuracy of forensics.
To investigate the risk of over-policing, Kristian Lum, lead statistician at the Human Rights Data Analysis Group, and PhD student William Isaac built a model of illicit drug use in the Californian city of Oakland to create hot-spot maps similar to those produced by typical predictive-policing software. In one version, however, they included data from crime surveys, which capture incidents and patterns that do not show up in police records. The heat maps the two versions produced were quite different: what was known to police was only a fraction of the estimated distribution.
Increased attention to hotspots identified by police records simply made them even more important to the algorithm. As one officer said in a 2019 RUSI report by Babuta and Oswald with Christine Rinik, senior lecturer at the University of Winchester: “We pile loads of resources into a certain area and it becomes a self-fulfilling prophecy, purely because there’s more policing going into that area, not necessarily because of discrimination on the part of officers.”
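The dynamic the officer describes can be reproduced with a toy simulation. In the sketch below, two areas have identical underlying crime rates, but patrols follow whichever area has more crime already on record, and patrolled areas generate far more records; all the rates are made-up assumptions chosen purely to illustrate the feedback loop.

```python
# A toy simulation of the self-fulfilling prophecy: identical true crime rates,
# but the area with more recorded crime receives the patrols, and patrolled
# areas record far more incidents than unpatrolled ones. All figures are
# assumptions for illustration.
import random

TRUE_WEEKLY_CRIMES = 10   # identical in both areas
REPORTED_FRACTION = 0.2   # share of crime recorded without a patrol present

def simulate(weeks: int = 26, seed: int = 7) -> list[int]:
    random.seed(seed)
    recorded = [1, 1]  # cumulative recorded incidents per area (the "database")
    for _ in range(weeks):
        patrolled = 0 if recorded[0] >= recorded[1] else 1  # follow the records
        for area in (0, 1):
            p_record = 1.0 if area == patrolled else REPORTED_FRACTION
            recorded[area] += sum(random.random() < p_record
                                  for _ in range(TRUE_WEEKLY_CRIMES))
        # The patrolled area keeps generating records, so it keeps being
        # patrolled, even though the underlying crime rates never differ.
    return recorded

if __name__ == "__main__":
    print("Recorded incidents after six months (area 0, area 1):", simulate())
```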
A further issue is whether the tools remove accountability from police decisions, an issue that concerns Annette Vestby of the Norwegian Police University College and Jonas Vestby of the Peace Research Institute Oslo in their research. They told E&T that systems presented as simply being for decision support can have an outsized effect on the choices their human users make. “A general piece of advice is to only use models that are widely viewed as unproblematic directly in decision-making and that more supporting information needs to be taken into account when models have issues,” they add. “There is a difference between using models to directly decide where to police [and using them] for strategic analytics by the police leadership or as part of research on the police.”
Although AI can be vulnerable to feedback loops in predictive policing, some researchers believe the technology has the potential to improve consistency in other areas of criminal justice, such as bail hearings and sentencing. Jens Ludwig of the Chicago Crime Lab and colleagues compared the performance of automated systems with that of judges in an experimental system they argued could reduce New York City’s jail population by more than 40 per cent. The expected gain comes from the way automated systems focus on the pieces of data most relevant to reoffending, which the researchers expected would favour greater leniency on average.
At the same time, the use of statistical data captured from court records suffers from a similar issue to predictive policing, which makes researchers cautious about deploying the technology. In determining whether a defendant receives bail, real-world statistics capture re-arrest events, not the reoffending outcomes the applications’ designers actually want. In the absence of an oracle telling the system who has reoffended without being caught, any recidivism model will be working with incomplete data.
A second issue is uncertainty in applying the data. Models used in sentencing systems are designed to create risk scores that are then used, in effect, as probabilities of reoffending. For a group, the two might align reasonably well; for individuals, the data may be far less reliable. In their report for RUSI, Oswald and colleagues argued that data which seems accurate at the group level can often conceal very low accuracy rates for individuals.
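A small, made-up numerical example shows how the two levels can diverge. Below, 1,000 hypothetical defendants are all given the same risk score of 0.6: the score looks well calibrated for the group, yet treating it as a yes/no prediction is wrong for roughly four in ten individuals.

```python
# A minimal numerical illustration, with made-up figures, of how a risk score
# can look accurate for a group while saying little about any individual.
import random

random.seed(0)
score = 0.6
outcomes = [random.random() < 0.6 for _ in range(1000)]  # who actually reoffends

# Group level: the score is well calibrated - about 60% of the group reoffends.
print("Observed group reoffending rate:", sum(outcomes) / len(outcomes))

# Individual level: treating the score as a yes/no prediction ("high risk,
# will reoffend") is wrong for every defendant who does not reoffend.
predicted_reoffend = [score >= 0.5] * len(outcomes)
errors = sum(pred != actual for pred, actual in zip(predicted_reoffend, outcomes))
print("Individual prediction error rate:", errors / len(outcomes))
```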
“Allowing the data to lead to conclusions based on statistical correlations certainly can be risky,” Oswald told E&T. “Do such conclusions make sense in the operational environment in which they are to be deployed? Is it legal or fair to use certain types of input data to draw conclusions about individuals? Policing data represents a limited picture of the past and caution needs to be exercised before using such datasets to make predictions about individuals.
“We should ensure that the human retains responsibility for the overall risk assessment. Many additional pieces of information are likely to be relevant to such an assessment, not just those factored into an algorithm.”
To try to provide the public with a greater degree of confidence in the use of software for tasks like predictive policing, some forces have enlisted academic support. But secrecy around the algorithms continues to be an issue. West Midlands Police asked the Alan Turing Institute to compile a report on the ethics of its plans for a data-analytics system, but with key parts redacted. The institute argued in its 2018 report: “The redactions have made it impossible to give a full appraisal of the [system] and especially of its solution architecture, which no doubt is the heart of the proffered solution.” However, the institute added it was encouraged by the attempt to build ethical objectives into the plan.
“We need clearer guidelines for conducting trials of new technology in policing and criminal justice, and clearer evaluation standards, not only for a system’s scientific validity but for its benefit (or otherwise) to the policing purpose and likely human rights impact in an operational environment,” says Oswald.
“As we argue in our paper, we believe quite a lot of oversight can be provided if the public know what data is used to train models, the learning goal of the system and how the system is implemented in decision-making,” the Vestbys add. “We believe there is an important case to be made for these aspects to be public knowledge, and to insist that, for example, domain expertise in other areas than machine learning or being part of an affected population are necessary and possible qualifications to participate in debates on implementation.
“We also think that the learning algorithm or model itself should undergo independent review by specialists, particularly when it is claimed that the system accounts for known biases.”
Private companies and criminal-justice agencies may resist calls for greater transparency in the use of AI. But, as a number of criminology and technology researchers point out, this should not get in the way of society determining how AI is used in policing.
AI goes where the money is
As with predictive policing, machine learning applied to the search for evidence has raised alarm bells. Big Brother Watch, for example, warned of the potential for AI to pull up combinations of documents that raise suspicion but which may have little to do with the case.
A less well-publicised application for AI, but one that is now beginning to be applied, lies in tracking the financing of crime as well as fraud and insider trading. The EU’s fifth anti-money-laundering directive came into effect in January 2020, and not only gives banks and others more access to ownership data, but more responsibilities to track how money is moved around.
The UK’s Financial Conduct Authority (FCA) has run several recent hackathons, involving technical staff from big banks and investment firms, focused on AI for spotting evidence of money laundering, although the head of its financial crime department, Rob Gruppetta, signalled caution about applying AI to the investigation of financial crime in a speech at Chatham House in 2018.
Much of the focus of anti-money-laundering AI so far has been less on keeping track of small payments intended to fly under the radar of banks’ and tax authorities’ normal checks than on the often Byzantine networks of companies and individuals that criminal gangs create to cover their tracks. The Russian Laundromat case involved more than 5,000 shell companies, 440 of them based in the UK, which collectively moved some £63bn of illicit wealth around the world.
Because governments expect banks to perform most of the checks, banks have the issue of how to trace networks that go beyond their own organisation in an environment where they cannot simply obtain confidential details from competitors. To deal with the issue, the hackathon teams have turned to technologies such as data anonymisation and homomorphic encryption to let them exchange data with each other more freely. The AI models can find connections that may pass a threshold for further investigation by law enforcement, who can then use writs and subpoenas to collect the ownership information they need.
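In outline, that network analysis can be sketched as a graph problem. The example below builds clusters of companies and individuals from ownership or payment links and flags those dominated by shell-like entities for human review; the features and threshold are assumptions for illustration, not those used by the hackathon teams.

```python
# A minimal sketch of ownership-network analysis: nodes are companies and
# individuals, edges are ownership or payment links, and clusters whose size
# and share of shell-like entities pass a threshold are escalated for review.
from collections import defaultdict, deque

def clusters(edges: list[tuple[str, str]]) -> list[set[str]]:
    """Group nodes into connected clusters of ownership/payment links."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, out = set(), []
    for node in adj:
        if node in seen:
            continue
        group, queue = set(), deque([node])
        while queue:
            n = queue.popleft()
            if n in group:
                continue
            group.add(n)
            queue.extend(adj[n] - group)
        seen |= group
        out.append(group)
    return out

def flag_for_review(group: set[str], shell_like: set[str],
                    min_size: int = 4, min_shell_share: float = 0.5) -> bool:
    """Escalate clusters that are large and dominated by shell-like entities."""
    share = len(group & shell_like) / len(group)
    return len(group) >= min_size and share >= min_shell_share

if __name__ == "__main__":
    links = [("Co A", "Co B"), ("Co B", "Co C"), ("Co C", "Mr X"),
             ("Co D", "Ms Y")]
    shells = {"Co A", "Co B", "Co C"}
    for g in clusters(links):
        print(sorted(g), "-> review" if flag_for_review(g, shells) else "-> ok")
```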
The network analysis used for ownership data does not require high-performance computing for the most part. It’s a different story when it comes to looking at the trading data collected under the EU’s Markets in Financial Instruments Directive (MiFID) II as regulators try to uncover the reasons behind trading irregularities. Tens of millions of transactions pass through the FCA’s market-data processor each day, putting much greater strain on the hardware.