AI to tackle racism

Can AI be used to tackle racism?


As artificial intelligence spreads into the education sector and the hiring process, questions arise over whether it will do more good than harm.

Imagine if technology could warn teachers if a child was about to 'kick off'. If an intelligent system could flag to staff that a combination of factors – a family row the night before, a missed breakfast, conflict with peers at school – meant that this child might need a different approach in class.

“The beautiful thing about being human is that we are patterned beings,” says data scientist Mike Bugembe. There is regularity to why we behave as we do.

These algorithms to aid teachers don’t yet exist, but one day they could, he hopes. There are so many measurable data points – a child’s heart rate over 24 hours, blood-sugar levels, socio-economic status – that could be combined into a comprehensive picture.
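As an illustration only – no such system yet exists, as Bugembe acknowledges – combining signals like these into a single flag might look something like the sketch below. Every feature name, weight and threshold here is invented.

```python
# Hypothetical sketch of the kind of early-warning flag described above:
# combine normalised daily signals about a pupil into one risk score and
# flag when it crosses a threshold. All names and numbers are illustrative.

def risk_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of normalised signals (each in the range 0-1)."""
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

WEIGHTS = {
    "elevated_heart_rate": 0.3,   # e.g. from a 24-hour wearable reading
    "missed_breakfast":    0.2,   # a low blood-sugar proxy
    "family_conflict":     0.3,   # e.g. a safeguarding note from home
    "peer_conflict":       0.2,   # reported friction at school
}

def needs_different_approach(signals: dict[str, float], threshold: float = 0.6) -> bool:
    """True when enough risk factors line up on the same morning."""
    return risk_score(signals, WEIGHTS) >= threshold

# A morning where several factors coincide trips the flag:
today = {"elevated_heart_rate": 0.8, "missed_breakfast": 1.0, "family_conflict": 1.0}
print(needs_different_approach(today))  # True (0.24 + 0.2 + 0.3 = 0.74)
```

The point of such a score would not be to label the child, but to prompt the teacher to try a different approach before anything escalates.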

This is important, because when education goes wrong for a young child, it can have a lifelong impact, says Bugembe.

It’s more likely to go wrong for young black children, who suffer disproportionately at school compared to their white peers. This doesn’t mean teachers are overtly racist, but research from the US reveals that black children are more likely to be excluded at preschool, and unfairly singled out as troublemakers. Young black Caribbean boys are nearly four times as likely to be permanently excluded from school, according to a report from the UK’s Institute of Race Relations.

“If the teacher has this information, they can treat the child a little differently and prevent him from throwing the chair, so he does not get expelled and end up in a vicious spiral,” says Bugembe. “When a child is unable to return to school, his propensity to end up in the criminal system goes through the roof.”

This project is yet to get off the ground, hampered by issues of confidentiality, not to mention lack of investment. But there’s no doubt education has a problem. Teaching workforces are overwhelmingly white, notes race think tank the Runnymede Trust in its 2020 report into race and racism in English secondary schools. All teachers, it says, need training to improve their racial literacy, and elements of the curriculum and school policies which risk entrenching the status quo need an overhaul.

Teachers’ perceptions are seen as the greatest barrier to success in education for young black Britons, according to a survey by the YMCA, which also showed that 95 per cent of young black people have witnessed racist language in education, with more than half of all young males saying they hear it “all the time”.

Could artificial intelligence help combat prejudice in education and beyond, as Bugembe hopes? To date, AI has been the villain of the piece, tainted by prejudices baked into the data that informs it. Behind a neutral front, it has been notorious for amplifying discrimination. Recent history is littered with examples, such as predictive policing in the US, which has been shown to lead police to target black neighbourhoods – in turn generating more crime data from those neighbourhoods and creating a biased feedback loop. Facial-recognition technology hasn’t been able to identify people of colour accurately, and has falsely identified innocent people as criminals.
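That feedback loop is easy to reproduce in a toy model. The sketch below is purely illustrative – the districts, rates and patrol numbers are invented – but it shows how a small skew in historical records, fed back into patrol allocation, locks in even when underlying crime rates are identical.

```python
# Toy model of the biased feedback loop described above. Both districts have
# the SAME true crime rate; district A merely starts with more *recorded*
# crime. Each year a patrol surge goes wherever records are highest, and
# more patrols mean more incidents get recorded -- so the skew never corrects.

TRUE_RATE = 100               # actual incidents per year, identical in A and B
DETECTION_PER_PATROL = 0.04   # share of incidents recorded per patrol unit
BASE_PATROLS, SURGE = 8, 4    # every district gets 8 units; the "hotspot" gets 4 extra

recorded = {"A": 55, "B": 45}  # a small initial skew in the historical data

for year in range(5):
    hotspot = max(recorded, key=recorded.get)  # targeted on past records, not true rates
    for district in recorded:
        patrols = BASE_PATROLS + (SURGE if district == hotspot else 0)
        recorded[district] = TRUE_RATE * DETECTION_PER_PATROL * patrols

print(recorded)  # A settles near 48 recorded incidents, B near 32 -- despite equal true rates
```

District A looks 50 per cent more criminal every year purely because the algorithm learns from its own previous output.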

Bugembe is aware of the shortcomings of smart technology. As chief data officer at JustGiving, he developed an algorithm that boosted fundraising campaigns and raised more than £20m a year for good causes. He found that fundraisers using the website were being advised to include photos of white men and bikes, because campaigns of that type were typically run by well-connected professionals who raised thousands of pounds from affluent colleagues.

“A typical AI system isn’t capable of asking why bias occurs, unless it’s designed to do so,” says Sadia Afroz, staff scientist at Avast. “One of the crucial limits of AI in detecting racism is human bias and the lack of standard testing infrastructure to discover it.” If you test it on biased data, it’s difficult to discover bias, she says.

Do we really want to automate complex decisions about children’s education and futures, asks neuroscientist Dr Lasana Harris. Based in the Department of Experimental Psychology at University College London, he looks at how society conditions us to become prejudiced, and he’s researching how we perceive AI in a social context. “We’re early days – we haven’t yet understood how we could use AI for social problems.”

The vast power of computing could be harnessed to reveal historic and institutional bias, he says. “Right now, that’s where its promise lies. As an alternative to the ‘bad apple’ approach of identifying racist individuals. Systemic racism is much harder to detect – you need a tool to figure out where that’s occurring.” AI, he says, could in theory be effective at flagging if black and minority-ethnic pupils are consistently awarded lower marks, for instance. “It’s what you do with this information that will be the human problem. AI can’t make ethical or moral decisions – a human has to.”

Rose Luckin, professor of learner-centred design at UCL Knowledge Lab, agrees – any use of AI in schools requires kid gloves, but it could draw attention to discriminatory decisions and behaviour and unwitting bias. “I’m not having a pop at teachers,” she says. “A lot of people aren’t aware they are biased, so pointing that out in a sensitive way can be helpful.” Technology could also be designed to predict situations where racial bias might occur – but this could be followed up with support and training delivered by a human being, she says. The entire process must be transparent, and the AI well designed and rigorously evaluated to ensure no biases creep in.

“If you are predicting from past behaviour where people are likely to be biased in their views, then it must be done ethically,” she says. “Whoever engages with the process needs to have signed up.”  

The real difference AI can make in the shorter term within education is in personalised learning – customising how and at what rate pupils learn – and it’s here technology is already beginning to alleviate teachers’ workload. Educational technology can learn about each student’s strengths and weaknesses and tailor-make paths to progress – with teachers on hand to intervene. “It has huge potential,” says Luckin. “It should be able to adapt in an unbiased way – providing it’s been designed correctly.”

It’s a question of resources too, says Luckin. “It’s hard for teachers to engage in thinking about AI when their days are so full.” Because of issues around young people and consent, these tools might be better used eventually at universities and further education to keep students engaged and personalise their experience. In 2019, Staffordshire University became the first UK university to introduce an AI-powered chatbot to give tailored nudges and guidance to individual students.

Could AI have more scope beyond education to tackle bias in our adult lives and help level the playing field? Amid growing awareness of racial unfairness, companies have scrambled to look at diversity among their own workforces since last summer – and many sectors have been found wanting.

Former orthopaedic surgeon Dr Alex Young believes technology is part of the puzzle. “AI has the potential to play an incredibly powerful role in eliminating racial bias in the hiring process.”

He’s left clinical medicine to develop immersive technology to train staff to be better doctors, recruiters, and colleagues. His Bristol-based company Virti has incorporated AI within a virtual-reality training platform, which places staff within environments – via headsets – where they learn by experience. “Unconscious biases are deeply ingrained,” he says. “Simply undergoing ‘awareness training’ isn’t enough to eliminate them.”  

While his tech was initially designed with medical training in mind, it works well for staff in charge of recruitment – an area fraught with risks of bias and overt discrimination. Trainees don a headset – or watch via desktop or mobile – and are thrust into a virtual scenario where they might, for instance, have to interview virtual candidates for a post. Interestingly, the tech can also be used to spot problems of sexual harassment.

All the while, an AI analyses their responses to different individuals, incorporating data from eye tracking and language analytics. It will pick up, for instance, if an interviewer doesn’t focus for long enough on a black or minority-ethnic candidate. “In a virtual environment, AI can be used to pick up a lot of data,” says Young. “We can analyse subtle cadences in how they speak to someone they like or don’t like, and see how virtual candidates are scored.”
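One such signal can be sketched in a few lines – this is not Virti’s actual system, and the function names and numbers are invented. Gaze “dwell time” per candidate is totted up from raw eye-tracking samples, then anyone who received markedly less attention than average is flagged.

```python
# Illustrative sketch of a dwell-time check built from eye-tracking samples.
from collections import defaultdict

def dwell_times(gaze_samples):
    """gaze_samples: (candidate_id, seconds) tuples from the eye tracker."""
    totals = defaultdict(float)
    for candidate, seconds in gaze_samples:
        totals[candidate] += seconds
    return dict(totals)

def flag_under_attended(totals, ratio=0.6):
    """Flag candidates whose total dwell time is under `ratio` of the mean."""
    mean = sum(totals.values()) / len(totals)
    return [c for c, t in totals.items() if t < ratio * mean]

# Invented samples: the interviewer barely looks at cand_2.
samples = [("cand_1", 4.1), ("cand_2", 1.2), ("cand_1", 3.8),
           ("cand_3", 4.4), ("cand_2", 0.9), ("cand_3", 3.6)]
print(flag_under_attended(dwell_times(samples)))  # ['cand_2']
```

A real platform would fuse many such signals – speech cadence, scoring patterns – before feeding back to the trainee.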

Virti’s system provides tailored feedback and will flag if individuals display biases and need more training.

This experiential training is more effective and long-lasting because it produces an emotional response, says Young. “We can personalise content so they can see the repercussions of their actions from the candidate’s viewpoint.”

In recent years, companies have turned to ‘blind’ recruitment platforms to eliminate prejudice from the hiring process. These ask a candidate to complete online applications and work exercises – without employers knowing anything about their ethnicity, gender, or background.

Removing these details matters, says neuroscientist Riham Satti. In the few seconds that a recruiter typically spends looking at an applicant’s CV, eye-tracking technology shows they focus on the candidate’s name, and where and what they studied. “But a name tells you nothing about a person,” she says. It’s here that prejudices can kick in – partly because we take mental shortcuts when overloaded with information. There are in fact some 140 types of cognitive bias, including the much-discussed unconscious bias, and the way our brains function makes them all but impossible to override. “It’s not anybody’s fault,” she says. “It’s how our brains are wired.” Unconscious bias training does help raise awareness around issues such as structural racism, but it doesn’t overcome the problem.

Using computational linguistics, Satti has co-developed a programme that scans a candidate’s cover letter and CV and removes any giveaway details – ethnicity, gender, school, among others – so recruiters are left looking solely at an individual’s skills and at the content, rather than the title and provenance, of their degree, for instance. This has had a substantial impact among her clients in the UK and abroad, with diversity and inclusion improving by some 30 per cent in companies deploying the technology. This spring, her company MeVitae will launch an algorithm – meticulously trained on non-biased data – which aims to help companies compile detailed shortlists.
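A heavily simplified sketch of the “blinding” idea – not MeVitae’s actual algorithm; the hand-written lookup lists below stand in for the curated databases and named-entity recognition a production system would need – might look like this:

```python
# Minimal CV-blinding sketch: strip details a recruiter could use as a proxy
# for identity (names, schools, pronouns) and keep everything else intact.
import re

# Illustrative lookup lists only; a real system would use large curated
# databases and proper named-entity recognition, not hand-written sets.
NAMES = {"Adewale", "Okonkwo", "Smith"}
SCHOOLS = {"Eton College", "City Comprehensive"}
PRONOUNS = {"he", "she", "him", "her", "his", "hers"}

def blind(text: str) -> str:
    # Multi-word school names are replaced first, then single words scanned.
    for school in SCHOOLS:
        text = text.replace(school, "[SCHOOL]")

    def sub_word(match):
        word = match.group(0)
        if word in NAMES:
            return "[NAME]"
        if word.lower() in PRONOUNS:
            return "[PRONOUN]"
        return word

    return re.sub(r"[A-Za-z]+", sub_word, text)

cv = "Adewale Okonkwo studied Physics. He attended Eton College."
print(blind(cv))
# [NAME] [NAME] studied Physics. [PRONOUN] attended [SCHOOL].
```

What survives is exactly what Satti argues recruiters should be judging: the skills and the substance of the qualification, stripped of its provenance.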

Historically it’s been difficult to pinpoint where and how structural racism occurs – it’s as hard to spot as it can be to explain to someone who’s never experienced it. But it has been around for years, says Afroz. “Human decisions and motivations are difficult to measure as humans can lie and make choices due to implicit biases unknown to them – whereas algorithms are truthful and transparent.”

The likes of IBM and Microsoft are investing heavily in removing blind spots and what data scientist Cathy O’Neil in her book ‘Weapons of Math Destruction’ calls “opinions embedded in mathematics”.

We need to hurry, says Bugembe. “My big concern is AI is only accelerating.” In 2014, experts predicted it would take at least a decade for an AI to beat a human at the ancient Chinese board game ‘Go’ – it took two years. “It’s growing at an exponential rate,” he adds, and the pandemic has accelerated its adoption. “Without policing and standards, all it’s doing is perpetuating bias against under-represented people.” It’s harder to retrofit algorithms than get them right in the first place. “It’s a social problem – it’s hard to interest investors. I’m seeing a lot of talk. But we haven’t seen the money yet.”


Explainable AI

Dr Caroline Chibelushi, who’s responsible for accelerating the safe adoption of ethical AI at KTN (which partners with government agency Innovate UK), discusses ‘explainable AI’.

“AI tools can reduce humans’ subjective interpretation of data, because machine learning algorithms learn to consider only the variables that improve the accuracy of their predictions (based on training data that’s used). There’s evidence to show algorithms can improve decision-making – unlike human decisions, those made by AI could be opened up, examined and interrogated.

“AI should be developed like any other machine. Take an industrial baking machine making mince pies – if the pies come out flat, the operator can stop the batch and investigate. A baker understands the importance of each ingredient. And so users of AI should have some understanding of how the tools work, what is expected, and an ability to question the results.

“If an AI tool developed to predict A-level results assigns 90 per cent of the lower grades to black children from poor areas, then the AI user should question the results and understand the importance and relevance of each feature. In this way the tools will be transparent, understandable and interpretable – what I call ‘explainable AI’.

“Explanations supporting the output of a model are crucial. They will help users to understand their responsibility in using AI tools.”
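The A-level example lends itself to a simple concrete check. The sketch below – with invented pupils, grades and groups – asks how a model’s lowest predictions are distributed across demographic groups: the kind of question Chibelushi argues every AI user should be able to put to a tool.

```python
# Before trusting a grade-prediction model, ask: who receives its lowest
# predictions? Data, group labels and the quartile cut-off are invented.

def low_grade_share(predictions, group_of, quantile=0.25):
    """Share of the lowest `quantile` of predictions received by each group."""
    ranked = sorted(predictions)                    # (grade, pupil_id), lowest first
    cutoff = max(1, int(len(ranked) * quantile))
    lowest = [pupil for _, pupil in ranked[:cutoff]]
    shares = {}
    for pupil in lowest:
        group = group_of[pupil]
        shares[group] = shares.get(group, 0) + 1
    return {g: n / len(lowest) for g, n in shares.items()}

# Invented model output: predicted grades for eight pupils in two groups.
preds = [(4, "p1"), (9, "p2"), (3, "p3"), (8, "p4"),
         (2, "p5"), (7, "p6"), (3, "p7"), (9, "p8")]
groups = {"p1": "B", "p2": "A", "p3": "B", "p4": "A",
          "p5": "B", "p6": "A", "p7": "B", "p8": "A"}

print(low_grade_share(preds, groups))  # {'B': 1.0} -- the bottom quartile is all group B
```

A result like this doesn’t prove the model is wrong, but it is exactly the signal that should send the user back to interrogate each feature before acting on the predictions.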

