Two police officers outside Westminster

Regulate AI in policing as ‘matter of urgency’, warns report

Image credit: Dreamstime

A new think tank report has called for the introduction of regulations ensuring transparency and responsibility when UK police forces trial machine learning (ML) tools.

The report, which was written by researchers from the Royal United Services Institute (RUSI) and the University of Winchester’s Centre for Information Rights, describes the current state of ML algorithm use in UK law enforcement and proposes a possible framework for appropriate use.

According to the authors, ensuring that ML tools are not misused in law enforcement is urgent, and the government should consider guidelines for their testing and deployment.

“This should be addressed as a matter of urgency to enable police forces to trial new technologies in accordance with data protection legislation, respect for human rights and administrative law principles,” the report says.

With UK police forces under growing pressure from an increasingly diverse workload, it is natural to consider adopting technologies to help them prioritise the areas and individuals most likely to need attention. Initial testing has demonstrated that ML tools may be more effective than standard human-led policing methods, in some cases proving twice as accurate.

ML algorithms are already being used for limited policing purposes in the UK. In May 2017, it was reported that Durham Constabulary would become the first police force to implement an ML tool – the Harm Assessment Risk Tool (HART) – which supports custody decision making. HART uses a form of supervised ML to classify the risk of individuals reoffending with violent and non-violent offences, based on 29 variables relating to criminal history and five relating to background characteristics. It was developed in partnership with University of Cambridge academics using data from 104,000 custody events over five years.
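To make the approach concrete, the sketch below shows a supervised classifier of the general kind described (HART is reported to use ensemble decision trees). The data, feature names and label rule here are entirely synthetic placeholders, not HART's real 34 variables or its training data.

```python
# Minimal, illustrative sketch of a supervised reoffending-risk classifier.
# All data and feature names below are invented for demonstration.
import random

from sklearn.ensemble import RandomForestClassifier

random.seed(0)

# Synthetic "custody events": [prior_offences, age_at_first_offence]
X = [[random.randint(0, 20), random.randint(12, 60)] for _ in range(200)]
# Toy labelling rule: more prior offences -> labelled "high risk" (1)
y = [1 if priors > 10 else 0 for priors, _ in X]

# Fit an ensemble of decision trees on the synthetic history
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Classify a new, unseen individual (15 priors, first offence at 18)
prediction = model.predict([[15, 18]])[0]
print(prediction)
```

The point of the sketch is only the shape of the pipeline: historical records in, a learned classification out, which a custody officer would then weigh alongside professional judgement.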

Norfolk Constabulary has used an ML algorithm (also developed in partnership with Cambridge researchers) to assess which burglary cases are most solvable to help decide which should be referred for further investigation, while Kent Police has trialled a commercial ML product (PredPol) to predict where and when crimes will take place.

“We anticipate that the use of ML algorithms for policing will increase in the coming years and be applied to a wider range of decision-making functions,” the RUSI authors told E&T via email. According to the authors, it is concerning that there are no guidelines for the trial and deployment of these technologies in the real world.

The need to ensure ML algorithms are used fairly is particularly salient given recent controversies over human biases being replicated and reinforced by ML systems; for instance, when facial recognition technology was demonstrated to perform poorly at identifying dark-skinned women. In law enforcement, these biases threaten to aggravate the overpolicing of minority communities and individuals.

ML tools such as HART do not use race as a variable, but can still disproportionately target disadvantaged groups using postcode as a proxy for race and social class. In May, Amnesty International published a report damning the Metropolitan Police’s use of the ML-based ‘Gangs Matrix’ (which identifies individuals at risk of engaging in gang violence) as “racially biased” and “unfit for purpose”. Meanwhile, a 2016 ProPublica investigation of ML use in US law enforcement found that just one out of five of the individuals identified as likely to commit a violent crime did so, while black defendants were twice as likely to be deemed high-risk as white defendants.

“If particular minorities have been disproportionately targeted by police action in the past, the algorithm may incorrectly assess those individuals as posing an increased risk in the future,” the researchers commented. “Acting on the predictions may then cause those individuals to again be disproportionately targeted by police action, creating a feedback loop whereby the predicted outcome simply becomes a self-fulfilling prophecy.”

“In reality, they may be no more likely to offend, just more likely to be arrested […] if the input data is biased, the algorithm may replicate and in some cases amplify the existing biases inherent in the dataset.”
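The feedback loop the researchers describe can be shown with a toy simulation. In this invented scenario, two groups offend at exactly the same rate, but one starts with more recorded arrests; if patrols are allocated in proportion to past arrests and new arrests feed the next round's predictions, the initial imbalance never corrects itself.

```python
# Toy simulation of a predictive-policing feedback loop.
# Both groups have an identical true offending rate; only the
# historical arrest records differ. All numbers are invented.
arrests = {"A": 60, "B": 40}   # biased historical records
TRUE_OFFENDING_RATE = 0.1      # identical for both groups

for _ in range(10):
    total = arrests["A"] + arrests["B"]
    for group in arrests:
        # "Predicted risk" is just the group's share of past arrests
        patrols = 100 * arrests[group] / total
        # More patrols -> more recorded arrests, regardless of fairness
        arrests[group] += patrols * TRUE_OFFENDING_RATE

share_A = arrests["A"] / (arrests["A"] + arrests["B"])
print(round(share_A, 2))  # group A's share stays at its inflated 0.6
```

Even after ten rounds, group A's share of arrests remains at its inflated starting level despite equal underlying offending: the prediction becomes self-fulfilling, exactly as the quoted passage warns.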

While some complications relating to the use of ML in law enforcement are already known, the authors argue that there may be further practical, legal and ethical difficulties which emerge as the technology is trialled in the real world.

“It is clear that new technologies must be trialled in a controlled way in order to assess their effectiveness, before being rolled out in an operational environment where they are likely to have a significant impact on citizens’ lives. However, there is currently no clear framework in place for how the police should conduct such trials,” the report says. “What is needed going forward are clear codes of practice to enable police forces to trial new algorithmic tools in a controlled way in order to establish whether or not a certain tool is likely to improve effectiveness of a certain policing function.”

“It is essential that such experimental innovation is conducted within the bounds of a clear policy framework and that there are sufficient regulatory and oversight mechanisms in place to ensure fair and legal use of technologies within a live policing environment.”

While existing data protection, human rights and administrative legislation places constraints on how police forces can store and process personal data, the authors are concerned that there is no clear code of practice for testing and implementation of ML algorithms.

“The lack of clear regulatory and oversight mechanisms is concerning,” the authors commented.

RUSI and the University of Winchester have recommended that a code of practice be developed for police forces trialling predictive policing and decision-making tools, including how ML predictions are presented to the subjects of those predictions, and laying out a standard process for resolving decisions when professional and algorithmic judgements differ. These trials should be limited and followed with independent evaluation, the report says, and the role of the official police inspectorate (HMICFRS) should include assessing compliance with these guidelines.

Only once these tools have been shown to work effectively in localised trials should they be deployed more widely, the authors wrote. Deployment should also be conditional on the approval of the public, local ethics boards, and an expert national working group, which would decide on fair requirements for the tools, such as appropriate selection of training data.

“It is crucial to engage the public in deciding how ML tools are used for law enforcement purposes, and also to ensure input from a range of experts from fields such as computer science, law and ethics. The recent controversy raised by the trials of facial recognition technology by the Metropolitan Police illustrates the challenges raised by experimentation and innovation in live policing environments,” the RUSI authors told E&T.

“It is important that police forces continue to make clear to the public what personal data is collected by police forces and the purposes for which it is used.”

Meanwhile, police officers will likely need to learn new skills to understand and use the ML tools. This reskilling would not just include gaining new technical skills, but also understanding the inherent bias in the systems. As artificial intelligence becomes an integral part of law enforcement, the College of Policing may begin to offer a course on the subject, while police forces should make an effort to recruit technical experts, the authors say.

“Developing and maintaining proprietary systems requires that there are individuals with the relevant skills and expertise working within the public authority. Efforts should be made to create attractive job opportunities within policing and criminal justice for software developers with the skills needed to develop proprietary algorithmic tools.”

The report also addresses concerns regarding the ‘black box’ nature of ML algorithms: that in many cases, it is not understood how they reach their decisions once they have been trained. The think tank has recommended laying out standards of transparency for the algorithms (including for commercial software) such that if used in criminal justice decision making, human experts are able to determine with confidence which factors were most influential in the prediction.
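One common technique for the kind of scrutiny the think tank recommends is inspecting which input factors a trained model weights most heavily. The sketch below illustrates this with scikit-learn's built-in feature importances; the feature names and data are illustrative placeholders, not variables from any real policing tool.

```python
# Hedged sketch of one transparency technique: ranking the input
# factors a trained model relies on. All names/data are invented.
import random

from sklearn.ensemble import RandomForestClassifier

random.seed(1)
features = ["prior_offences", "age", "postcode_band"]

# Synthetic records where only prior_offences actually drives the label
X = [[random.randint(0, 20), random.randint(18, 70), random.randint(0, 9)]
     for _ in range(300)]
y = [1 if row[0] > 10 else 0 for row in X]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global feature importances: a coarse answer to "which factors
# were most influential in the prediction?"
ranked = sorted(zip(features, model.feature_importances_),
                key=lambda pair: -pair[1])
print(ranked[0][0])
```

Because the synthetic label depends only on `prior_offences`, that feature dominates the ranking; the same inspection applied to a real tool is one way a human expert could check whether a proxy variable such as postcode is doing more work than intended.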

“As law enforcement agencies make increased use of (semi-) automated analysis tools underpinned by machine learning, this lack of transparency will present significant legal challenges,” the authors said. “In order for an individual to assess whether they have been subject to unfair decision-making in the criminal justice system, they must be able to review and scrutinise the decision-making process that led to certain actions being taken.”

“One can conceive of a situation in which an individual claims that they have been treated unfairly in the criminal justice system and the role played by the algorithmic prediction in the decision-making process comes under scrutiny.”

The authors concede that it may be impossible to remove all forms of bias from these tools, in which case they will require constant “attention and vigilance” to ensure that decision-making which takes these predictions into account is as accurate and unbiased as possible.
