
Police facial recognition matches ‘almost entirely inaccurate’, privacy group warns


The UK's privacy watchdog has held open the prospect of legal action against law enforcement bodies using biometric tools, after Big Brother Watch brandished figures showing that a large proportion of ‘matches’ were incorrect.

The UK’s privacy watchdog has announced it will consider launching legal action against police forces that are known to use facial recognition software if concerns about the biometric technology are not adequately addressed.

Elizabeth Denham, the Information Commissioner, said there had been a lack of transparency about how the “particularly intrusive” software was being used. However, she acknowledged it represented “both a risk and opportunity” for public protection.

“There may be significant public safety benefits from using facial recognition technology - to enable the police to apprehend offenders and prevent crimes from occurring,” Denham said.

She spoke out after campaign group Big Brother Watch highlighted figures showing that a large number of ‘false positives’ had resulted from police use of facial recognition cameras at sporting and cultural events in recent years.

For the Metropolitan Police, 98 per cent of ‘matches’ were in fact wrong, while for South Wales Police the figure was 91 per cent.

Both forces have insisted the accuracy of the technology is improving over time. They also say safeguards are in place to prevent tangible action being taken against innocent people.

Facial recognition security software typically scans CCTV footage looking for matches with images on watch lists.
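As an illustration of how such matching typically works, the sketch below compares faces found in a single CCTV frame against a small watch list, using the open-source face_recognition Python library. The file names, the 0.6 distance threshold and the one-face-per-photo assumption are illustrative only, not details of any police system.

```python
# Minimal watch-list matching sketch using the open-source
# face_recognition library (pip install face_recognition).
# File names and the 0.6 threshold are illustrative assumptions.
import face_recognition

# Encode each watch-list photo into a 128-dimensional face descriptor
# (assumes exactly one face is visible per photo).
watchlist = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in ["suspect_a.jpg", "suspect_b.jpg"]
]

# A single frame grabbed from a CCTV feed.
frame = face_recognition.load_image_file("cctv_frame.jpg")

for face in face_recognition.face_encodings(frame):
    # Euclidean distance in descriptor space; lower means more similar.
    distances = face_recognition.face_distance(watchlist, face)
    if distances.min() < 0.6:  # the library's common default tolerance
        print(f"Possible match (distance {distances.min():.2f}) - "
              "refer to a human operator for confirmation")
```

In a live deployment this loop would run continuously over video frames, and a system alert would only be a prompt for human review, consistent with the safeguards the forces describe below.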

Director of Big Brother Watch Silkie Carlo said: “It is deeply disturbing and undemocratic that police are using a technology that they have no legal power for and that poses a major risk to our freedoms.”

A spokeswoman for Scotland Yard said the London force has been using facial recognition technology “to assess if it could assist police in identifying known offenders in large events, in order to protect the wider public”.

Addressing false positives, the Met stated: “We do not consider these as false positive matches because additional checks and balances are in place to confirm identification following system alerts.”

E&T last year reported on the Met’s deployment of 140 so-called super recognisers at the Notting Hill Carnival. These are officers with exceptionally high perception and memory skills who are tasked with scanning crowds in search of the faces of known troublemakers.

The move followed revelations that other UK police forces had pulled back from trialling facial recognition software in their jurisdictions for fear of adverse public reaction.

At the time, child safeguarding expert Charlie Hedges said he had managed to secure CCTV cameras and Facewatch technology on a pro bono basis and that several large shopping centres had agreed to take part in operations designed to test drive the system.

He added: “The police initially were OK with it and then, just as we were about to make it go live, they pulled out. It’s so frustrating.”

Big Brother Watch was founded by Matthew Elliott, one of the leading figures in Vote Leave and the founder of the libertarian Taxpayers’ Alliance. Its office is based in the same Westminster building as six other right-of-centre organisations, including the climate sceptic Global Warming Policy Foundation.

Big Brother Watch has not objected to the use of super recognisers. However, it believes facial recognition technology should be ditched entirely by the police, regardless of its level of accuracy.

Dr Josh Davis, an expert in super recognisers, told E&T that many people worried about facial recognition because they “probably think it is better than it really is”.

Perversely, the fact that the vast majority of people are law-abiding actually heightens the chance of false positives being thrown up by machines: when genuine watch-list subjects make up only a tiny fraction of a scanned crowd, even a small error rate applied to everyone else can produce more false alerts than true ones.
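A back-of-the-envelope calculation makes this base-rate effect concrete. All of the figures below are illustrative assumptions, not data from the trials reported above:

```python
# Base-rate illustration: why scanning mostly innocent crowds yields
# mostly false alerts. All numbers are illustrative assumptions.
crowd_size = 100_000           # people scanned at a large event
on_watchlist = 20              # genuine watch-list subjects in the crowd
true_positive_rate = 0.90      # system flags 90% of genuine subjects
false_positive_rate = 0.001    # system wrongly flags 0.1% of everyone else

true_alerts = on_watchlist * true_positive_rate                    # 18
false_alerts = (crowd_size - on_watchlist) * false_positive_rate   # ~100

share_wrong = false_alerts / (true_alerts + false_alerts)
print(f"Alerts: {true_alerts + false_alerts:.0f} in total, "
      f"{share_wrong:.0%} of them wrong")   # roughly 85 per cent wrong
```

Even with a seemingly tiny 0.1 per cent false positive rate, false alerts outnumber true ones roughly five to one in this scenario, simply because the innocent vastly outnumber those on the list.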

Speaking to E&T earlier this year, Alexander Babuta, from British defence think-tank the Royal United Services Institute, said there was too much focus on the issue of false positives and bias.

He said: “We seem to be holding the machines to a higher standard than human decision-makers. The question has to be: is the machine any worse or better than our current system?”
