San Francisco officials reject use of facial-recognition technology
San Francisco officials have agreed to ban the acquisition of facial-recognition technology for use by city personnel.
Amid rising tension over privacy and the growing capabilities enabled by cloud computing, San Francisco city officials voted 8-1 to ban the purchase and use of facial-recognition technology by city personnel, while moving to regulate other surveillance technologies.
San Francisco is not alone in clamping down on technology found to potentially disadvantage certain groups, such as in the context of law enforcement. The city now finds itself at the forefront of growing discontent in the US over the implications facial recognition could have for its 880,000 residents.
Across the US, facial-recognition tools in police services have become more mainstream in recent years. Including federal, state and local law enforcement, the government facial biometric market is expected to swell to $375m in 2025, up from $136.9m in 2018.
In the context of US law enforcement, demand for the technology has flourished. A report from the Center on Privacy and Technology at Georgetown Law notes that at least five major police departments - including agencies in Chicago, Dallas and Los Angeles - either claimed to run real-time face recognition from street cameras, had bought technology capable of doing so, or had expressed an interest in buying it.
The San Francisco officials’ decision has attracted criticism that it could stall the city’s progress and success. San Francisco’s “ban on facial recognition will make it frozen in time with outdated technology,” said Daniel Castro, vice president of the Information Technology and Innovation Foundation.
Whether the US government might consider using face identification for mass surveillance, as China has, remains unknown.
A recent report by the Biometrics and Forensics Ethics Group commented on the ethical issues raised by the use of live, or real-time, face-recognition technology for policing purposes. It concluded that questions remain over the accuracy of live facial-recognition technology, its potential for biased outputs, biased decision-making on the part of system operators, and ambiguity about the nature of current deployments.
San Francisco’s ordinance, which would also require city departments to submit surveillance technology policies for public vetting, could become final after a second vote next week by the same officials, the city’s Board of Supervisors.
Aaron Peskin, the supervisor who emerged as the leading advocate for the ban, said there is a fundamental duty to safeguard the public from potential abuses - especially to protect “marginalised groups” that could be harmed by the technology - and that the ordinance was not an “anti-technology policy”.
Other surveillance techniques, such as security cameras, would remain in use. The district attorney or sheriff could also appeal to use certain restricted technology in exceptional circumstances.
In the UK, there are examples of why police forces may struggle with opposition. According to a Big Brother Watch report, the Metropolitan Police - the UK’s largest police force - admitted that its system, which stores biometric photos for 30 days, wrongly identified 102 people. Officers operating the force’s facial-recognition software at Notting Hill Carnival in 2017 said they staged interventions with around five innocent people incorrectly matched in a single day, asking the festival-goers to prove their identity.
In the United States, the use of automated facial recognition at large-scale events often lies within the remit of commercial actors: unlike in the UK, stadiums and other outdoor spaces in the US are often privately owned. This can lead to the technology being used on visitors without their knowledge - in March 2018, it was revealed that the operators of Madison Square Garden had done exactly this.
Outside the US, the use of facial recognition has grown more pervasive still. According to a 2018 Big Brother Watch report, Russia has approximately 160,000 cameras following its citizens around the clock. China has 170 million CCTV cameras installed, many recently upgraded with automated facial-recognition software capable of identifying millions of citizens within a second, the report states.