
Facial recognition technology and biometric IDs: the face of future surveillance
A lack of clarity surrounding the legal aspects of biometric identification is holding back efforts to implement facial recognition technology.
“Meeting eye to lens, the feeling of intrusion is unlike the ubiquitous CCTV we are usually subjected to – you know you are being measured, assessed, identified, invaded.”
That is what Silkie Carlo, human rights group Liberty’s technology policy officer, said about her experience of peering into the glassy artificial eye of an automated facial-recognition camera that was being trialled by the Metropolitan Police at the Notting Hill Carnival in London earlier this year.
At the same event, police deployed 140 ‘super recognisers’: officers with exceptionally strong face-perception and memory skills, tasked with scanning crowds for the faces of known troublemakers.
This detail passed without comment from Carlo in a 1,300-word blog post she wrote protesting against the Met’s use of biometric technology, which she denounced as crude, wasteful and potentially racist.
She also did not mention the reams of footage of the annual street party constantly recorded by spectators on smartphones and instantly uploaded to the internet, where anyone – including the police – could view it.
These omissions are relevant because they show how new technology can spark particular kinds of fears, rational or otherwise, when used to help maintain law and order. Dystopian fiction – from ‘Nineteen Eighty-Four’ to ‘Minority Report’ – plays on these fears.
“It is the stuff of dystopian literature for a reason,” Carlo wrote in her blog post. “In a society that has rejected ID cards, the prospect of biometric checkpoints overshadowing our public spaces is plainly unacceptable and frankly frightening.”
Trust in governments was hit by Edward Snowden’s revelations about the extent of surveillance in many Western countries, and some concerned citizens increasingly seek anonymity offline as well as online. However, is biometric surveillance really so frightening in liberal, democratic societies where people are afforded legal and constitutional protections, and where daily life is quite different from that in totalitarian dictatorships?
“I think people worry about Big Brother far more than necessary,” says Dr Josh Davis, an expert in super recognisers who has also carried out research into identification of unfamiliar faces via CCTV. “They’ve seen facial recognition in science fiction or ‘Spooks’ or something like that and they probably think it is better than it really is.
“It’s right for people to be concerned that there may be a risk to their privacy or normal rights from computerised systems, I just don’t think at the present time those systems are sufficiently good to cause those sorts of worries.”
Regardless of whether the technology is any good, the clamouring of campaigners seems to have had an effect. Two weeks after a media storm over the Met’s use of facial biometrics, it emerged that other UK police forces had pulled back – for now, at least – from trialling facial-recognition software in their jurisdictions, apparently for fear of an adverse public reaction.
Charlie Hedges, a former senior National Crime Agency officer who now runs a security consultancy, says trials meant to have taken place in several large shopping centres in conjunction with technology firm Facewatch had been scrapped at the eleventh hour. “The police initially were OK with it and then, just as we were about to make it go live, they pulled out. It’s so frustrating,” he told E&T.
“If there is a high-risk missing child, particularly in a shopping-centre scenario – a Jamie Bulger type of thing [a toddler kidnapped and murdered in 1993] – because in shopping centres you have such a huge number of CCTV cameras everywhere, the development of increasingly sophisticated facial-recognition technology means that if you have those extremely vulnerable missing children on a watch list that is linked to facial-recognition enabled cameras you’ve got a chance of being able to identify them more quickly.”
UK trials of other facial-recognition products have also been slow to start. British security technology company Digital Barriers has not yet had its SmartVis software platform tested in real-world law-enforcement scenarios in the UK, despite offering it free of charge to police in cases involving missing young people.
Another company, NEC, did manage to have its facial-recognition service, NeoFace, road-tested by Leicestershire Police three years ago and by South Wales Police at the football Champions League final in Cardiff earlier this year, but there have been no UK trials of it announced since.
Another prime mover in the field of facial recognition, SeeQuestor – which works with the Met and British Transport Police – did not reply to an inquiry from E&T seeking information about any trials of its products.
Facial-recognition security software typically scans CCTV footage looking for matches with images on watch lists. It has been described as like running a constant Google-style search on a series of faces through CCTV footage in real time.
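In rough terms – and as an illustration only, not any vendor’s actual pipeline – that ‘constant search’ amounts to converting each detected face into a numerical ‘embedding’ and comparing it against stored embeddings of watch-list faces. The sketch below, with invented names, toy four-dimensional vectors and an arbitrary similarity threshold, shows the core comparison step; a real system would use embeddings of 128 or more dimensions produced by a trained neural network.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(face_embedding, watchlist, threshold=0.8):
    """Return the best watch-list match scoring above the threshold, or None."""
    best_name, best_score = None, threshold
    for name, ref_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, ref_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy embeddings standing in for watch-list entries.
watchlist = {
    "suspect_a": np.array([0.9, 0.1, 0.0, 0.1]),
    "suspect_b": np.array([0.1, 0.9, 0.2, 0.0]),
}

# A face detected in one CCTV frame; close to suspect_a's embedding.
probe = np.array([0.88, 0.12, 0.05, 0.1])
print(match_against_watchlist(probe, watchlist))  # prints "suspect_a"
```

Run frame by frame over a live feed, this loop is what turns passive CCTV into an active search – and the choice of threshold is where the trade-off between false alarms and missed matches is made.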
Privately, some security insiders speak of reticence among police leaders about embracing the technology, saying the absence of an official rule book on the use of facial biometrics has left a regulatory gap that makes chief constables wary. They also point to the weaknesses of CCTV images or facial-recognition evidence when used in court prosecutions, and the potential for successful legal challenges against convictions that rely on it.
The UK nevertheless remains one of the most surveilled countries in the world. The Police National Database now holds 19 million facial images, many of people who were arrested but never charged or convicted. Earlier this year Mike Barton, the chief constable of Durham Constabulary, said his officers were compiling image databases of “villains” using footage from body-worn video cameras and were using this to study gait and mannerisms as well as facial features.
While videos are usually deleted after a month unless they are needed for prosecutions, they are reportedly retained for longer in Durham in the case of suspects with previous convictions.
The Biometrics Commissioner’s recent assessment that images on police databases are being used in a way that goes “far beyond” custody purposes has given some in law enforcement pause for thought. Several years ago MPs on the Science and Technology Committee made the same point, but the lackadaisical attitude shows no sign of abating.
The government was supposed to have published a biometrics strategy three years ago, but this has been serially delayed. The Home Office promises only that the strategy will be released “in due course”. Meanwhile, the Information Commissioner’s Office and the forensics watchdog are understood to be probing the ramifications for the police of new data-protection laws.
Legally, DNA and fingerprint records cannot normally be retained for longer than six months in cases where no criminal charges are brought. However, there are no such safeguards around the retention of facial biometrics.
“Once you start using an unconvicted photograph for anything other than intelligence use, it’s likely illegal,” opines David Videcette, a former senior Scotland Yard detective who played a major role in the investigations into the July 2007 London Tube bombings.
The regulatory vacuum should, he says, be “sorted out” via a change in the law and a retrospective weeding of photographs of innocent people. He is also uncomfortable about situations where “outside contractors” – technology companies and private security staff at shopping centres, for example – might be allowed to leverage biometric data that police should arguably not have on file in the first place.
“Internally within the police, in counter-terrorism, we can look at anything,” he says. “For borough-based officers dealing with high volume crime – car thefts, minor assaults, perhaps even a GBH [grievous bodily harm], it’s ‘No you’re not allowed to look at this database’ and ‘No you’re not allowed to look at that database’. You’re not allowed to know what’s there.
“We’ve got massive databases, huge databases, which we use for intelligence purposes only, but we can’t justify putting them in other people’s databases or going public with some of this stuff.
“Even around things like automatic number plate recognition, we’ve got an incredible amount of data which means you can virtually track a car from one point to another. We have these databases which will tell us exactly which car went where. Internally we say you can only access this database in the most extreme circumstances, and really it’s only counter-terrorism. Even with murder, up to a point, we would say no.”
Lord Harris of Haringey, a Labour peer who carried out a review for London Mayor Sadiq Khan into the city’s preparedness for terror attacks, sees an irony in the fact that police surveillance of the public realm remains so controversial but smartphones and drone cameras have meant ordinary people can record so extensively in public, and sometimes private, spaces.
He also points out that Facebook allows users access to facial-recognition technology that can group photographs based on which people have been automatically tagged in them. Because of different data-protection laws the service, called Moments, exists in different versions in the US and UK, though facial recognition is already used at passport control at many UK airports.
“This technology is advancing so rapidly that, in a sense, it’s a bit silly for the police not to be making use of it,” says Lord Harris. “Is this any different from the fact that you might have a briefing for 60 police officers showing them a picture of a suspect and telling them to go scan the crowd and if they see them, to go and arrest him or her? They’ve been using human spotters of people for a long time and I’m not sure what the difference really is apart from scale and numbers.”
John Kennedy, head of digital forensics for the company Key Forensic Services and a pioneer in police use of video, argues that the extent of surveillance is less important than the ability to analyse and store images well – skills lacking in the UK.
“Historically, over the last two or three decades, the UK has led the world in surveillance, and there’s an assumption that we are extremely good at using that evidence,” he says. “But considering the amount of CCTV in the UK, we have never actually harnessed that information properly. The resources simply haven’t been available. We haven’t trained police officers to properly analyse [the images] or store them and create databases to use them for intelligence-gathering purposes.
“In China, they have actually gone about it properly. They have created proper databases which they have kept updated, adding to the metadata so that people can be flagged up on screen and you can see their name, date of birth and probably their antecedent history.
“We in the UK simply haven’t done that. We haven’t sat down and said, look, this is a huge evidential medium. How are we going to manage it? How are we going to use it to our best advantage? We’re playing catch-up.”
While privacy campaigners appear to view the UK as an Orwellian dystopia where the authorities have Stasi-style snooping powers, at the other end of the spectrum the securocrats claim they are fighting crime with one hand forever tied behind their backs. The truth is less clear-cut, though.
Technologically, the police have awesome surveillance capabilities. However, for various reasons, they are currently incapable of using them to anything like their full potential most of the time. Many images on databases have been recompressed or have the wrong aspect ratio, rendering them useless in practice. Police leaders either don’t know how to use the technology or are reluctant to. In many cases there is too much data, of too poor quality, and it is not managed in a way that could ever make it useful. Moreover, cases are often unlikely to be successfully brought to court.
“The counter-terrorist agencies are obviously very, very good at this sort of thing, and they have the resources,” says Kennedy. “But in your provincial forces it all depends on who the chief is and how technologically-minded they are and how much they believe in the value of CCTV as an investigative tool. There’s no joined-up approach.”
Unlike in China, where surveillance systems are centralised and all-embracing, and where the government’s desire for rigid control of its people gives it every incentive to pour resources into biometrics, the situation in the UK has been described as shoddy, fragmented, chaotic and lacking in direction.
That may or may not come as a relief to those worried about the power of the state, but it is frustrating for people wishing to make effective use of technology to enhance public safety.
In short, it’s another typically British muddle.