Boston City Council votes to ban facial-recognition tech
Boston City Council has voted unanimously to prevent the use of facial recognition by public agencies, including the city's police force, citing “racial bias in facial surveillance”.
The new law makes it illegal for city officials to “obtain, retain, possess, access or use” any type of facial-recognition technology. Boston's move follows similar action taken by neighbouring cities in Massachusetts (Springfield, Cambridge, Northampton, Brookline and Somerville), while other US cities, such as San Francisco and Oakland, have also implemented bans.
The council's decision noted that facial recognition technology is “less accurate for African American and [Asian American and Pacific Islander] faces” and that “racial bias in facial surveillance has the potential to harm communities of colour who are already facing increased levels of surveillance and harassment”.
Council member Michelle Wu said: “Boston should not use racially discriminatory technology that threatens the privacy and basic rights of our residents”.
Liz Breadon, another Boston city councillor speaking at the council meeting, added: “I think that there’s a good reason to ban this technology right now, because it’s unreliable, and moving forward, we have to also consider whether just because something is possible, that it’s the right thing to do.”
“Surveilling our population at large and doing facial identification is not necessarily the way we want to go in a free society.”
However, the new law still allows Boston police and officials to pursue leads and tips supplied by other law enforcement agencies, even where those leads may have been generated through facial-recognition technology. Boston police are thus still able to act on information acquired as a result of its use elsewhere.
Boston’s decision was guided by the American Civil Liberties Union (ACLU), whose “Press Pause On Face Surveillance” campaign is focused on passing city-wide restrictions on the use of facial recognition technology across the US.
In a statement, Carol Rose, executive director, ACLU of Massachusetts, said: “To effectively address police abuses and systemic racism, we must address the tools that exacerbate those long-standing crises. Face surveillance supercharges the policing of black and brown communities and tramples on everyone’s rights to anonymity and privacy.”
Speaking to a local radio station, Kade Crockford, also of the ACLU, said: “Let's ensure that we put the policy horse before the technology cart and lead with our values so we don't accidentally wake up someday in a dystopian surveillance state.”
“Behind the scenes, police departments and technology companies have created an architecture of oppression that is very difficult to dismantle.”
Earlier this week, the city council of Santa Cruz, California, voted unanimously to end the city's use of facial recognition technology, along with any use of predictive policing.
Boston’s announcement follows the news of a facial recognition failure in Detroit, Michigan, earlier this year, when Robert Williams, a black man, was mistakenly identified by a facial-recognition system used by local police. Williams was arrested at his home, in front of his two young daughters, and spent 30 hours in jail. The charges were dropped when police realised they had arrested the wrong man.
Facial-recognition technology has come in for unrelenting criticism in recent months.
Amazon’s cloud-based Rekognition platform is widely used by US police forces, including forces in Florida and Oregon, to identify persons of interest, as well as being used by private organisations. However, Rekognition has been repeatedly shown to demonstrate bias and frequently exhibits a wide margin of error in its matches, especially with dark-skinned faces. Earlier this month, Amazon placed a year-long moratorium on police use of its facial-recognition technology, in the hope of giving lawmakers time to introduce regulations for ethical use of the technology.
Amazon's announcement came one day after IBM confirmed that it was exiting the facial-recognition business, stating in a letter to the US Congress that it would end its development of “general purpose” facial-recognition software, instead supporting a dialogue about the ethics of the use of the technology by domestic law enforcement.
In the US, many commercial facial recognition systems misidentify people of colour more often than white people, according to a government study, adding fuel to scepticism over a technology widely used by law enforcement agencies. The technology also struggles to identify transgender people.
In the UK, a Cardiff man who brought one of the world’s first court challenges over police use of facial-recognition technology has taken his fight to the Court of Appeal, after the original case was defeated in September last year. Ed Bridges, 37, brought the legal action at the High Court after claiming his face was scanned while he was Christmas shopping in 2017 and while attending a peaceful anti-arms protest in 2018.
South Wales Police has hit the headlines over its use of facial recognition more than once, repeatedly using the technology to monitor football fans attending matches at the Cardiff City Stadium.