
Bias study hopes to inspire development of fairer facial recognition technology

Image credit: Karel Noppe/Dreamstime

A study looking into the accuracy and bias of gender and skin colour in automatic face recognition algorithms tested with real-world data has found that some demographics show higher false positive or false negative rates.

Facial recognition has been routinely used by both private and governmental organisations worldwide. Automatic face recognition can be used for legitimate and beneficial purposes (for example, to improve security) but at the same time, its power and commonplace use increases the potential negative impact that unfair methods can have on society (for example, discrimination against ethnic minorities). Therefore, a necessary, albeit not sufficient, condition for a legitimate deployment of face recognition algorithms is equal accuracy for all demographic groups. 

With this purpose in mind, researchers from the Human Pose Recovery and Behaviour Analysis Group at the Computer Vision Centre (CVC) in Spain and the University of Barcelona (UB) organised a challenge within the European Conference on Computer Vision (ECCV) 2020. The challenge evaluated the accuracy of the algorithms submitted by participants on the face verification task in the presence of other confounding attributes.

The challenge was a success, since “it attracted 151 participants, who made more than 1,800 submissions in total, exceeding our expectations regarding the number of participants and submissions,” explained Sergio Escalera of UB, who led the study. 

As part of the study, participants used an imbalanced image dataset, which simulated a real-world scenario in which AI-based models are trained and evaluated on skewed data (containing considerably more light-skinned males than dark-skinned females). In total, participants worked with 152,917 images of 6,139 identities.

These images were then annotated for two protected attributes (gender and skin colour) and five legitimate attributes: age group (0-34, 35-64, 65+), head pose (frontal, other), image source (still image, video frame), wearing glasses, and bounding box size.
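The annotation scheme above could be represented as a simple per-image record. The following is an illustrative sketch only: the field names, value encodings and example data are assumptions, not the study's actual format.

```python
# Illustrative record for one annotated image; field names and category
# encodings are hypothetical, based on the attributes the article lists.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FaceAnnotation:
    identity_id: int        # one of the 6,139 identities
    # protected attributes
    gender: str             # e.g. "male" / "female"
    skin_colour: str        # e.g. "light" / "dark"
    # legitimate attributes
    age_group: str          # "0-34", "35-64" or "65+"
    head_pose: str          # "frontal" or "other"
    image_source: str       # "still" or "video_frame"
    wearing_glasses: bool
    bbox_size: Tuple[int, int]  # (width, height) in pixels, assumed

sample = FaceAnnotation(
    identity_id=42, gender="female", skin_colour="dark",
    age_group="0-34", head_pose="frontal", image_source="still",
    wearing_glasses=True, bbox_size=(128, 128),
)
print(sample.age_group)
```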

The researchers found that the top winning solutions exceeded 99.9 per cent accuracy while scoring very low on the proposed bias metrics. Julio C S Jacques Jr, a researcher at the CVC and at the Open University of Catalonia, said that such results "can be considered a step toward the development of fairer face recognition methods".

An analysis of the top 10 teams showed higher false-positive rates (incorrect matches) for females with darker skin tones and for samples in which both individuals wore glasses. In contrast, false-negative rates were higher for males with lighter skin tones and for samples in which both individuals were aged under 35.
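A per-group error analysis of this kind can be sketched in a few lines. The snippet below is a minimal illustration, not the study's evaluation code: the group labels, field names and toy data are assumptions, and a verification "result" is modelled as a ground-truth same-identity flag plus a predicted match.

```python
# Hedged sketch: compute false-positive and false-negative rates per
# demographic group for a face-verification task. Data is made up.
from collections import defaultdict

def error_rates_by_group(results):
    """results: iterable of dicts with keys 'group' (demographic label),
    'same_identity' (bool, ground truth) and 'predicted_match' (bool)."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in results:
        c = counts[r["group"]]
        if r["same_identity"]:          # genuine pair
            c["pos"] += 1
            if not r["predicted_match"]:
                c["fn"] += 1            # missed a true match
        else:                           # impostor pair
            c["neg"] += 1
            if r["predicted_match"]:
                c["fp"] += 1            # incorrect match
    return {
        g: {
            "fpr": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "fnr": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for g, c in counts.items()
    }

# Toy usage with fabricated pairs (group names are illustrative):
demo = [
    {"group": "dark-skin-female", "same_identity": False, "predicted_match": True},
    {"group": "dark-skin-female", "same_identity": False, "predicted_match": False},
    {"group": "light-skin-male", "same_identity": True, "predicted_match": False},
    {"group": "light-skin-male", "same_identity": True, "predicted_match": True},
]
rates = error_rates_by_group(demo)
print(rates)
```

Comparing such per-group rates, rather than a single overall accuracy figure, is what reveals the disparities the study reports.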

Furthermore, the study found that in the dataset, individuals younger than 35 wear glasses less often than older individuals, meaning the effects of these two attributes are entangled.

“This was not a surprise, since the adopted dataset was not balanced regarding the different demographic attributes. However, it shows that overall accuracy is not enough when the goal is to build fair face recognition methods, and that future works on the topic must take into account accuracy and bias mitigation together,” concluded Jacques Jr.
