Facial recognition scanning concept
Comment

AI bias is down to people, not technology

Image credit: Monkey Business Images/Dreamstime

Rather than blaming artificial intelligence when it appears to be guilty of prejudice, we need to look deeper for the human causes.

Negative headlines surrounding AI are hardly new. But while the allegations of robots stealing our jobs have been widely discredited, stories around AI bias have shone a light on a pervasive issue, and on human prejudices.

When implemented properly, AI can fuel huge efficiencies and unlock significant improvements in customer experience and engagement. For marketers, it can be a particularly valuable tool: AI can analyse data on customers’ predicted browsing and buying behaviour and their demographic segment, and those insights can then be used to make informed decisions about when and how people are likely to buy.

Through having these insights, brands can ensure they’re delivering the right message to the right person at the right time. Shoppers enjoy a seamless customer experience and personalised recommendations, while the brand benefits from increased engagement and ultimately sales. But as the headlines prove, AI is not without its risks. And all too often, companies implementing AI overlook the critical role humans play in shaping the algorithms, leading to disastrous results.

Last year, for example, Twitter was tarnished by allegations of racism after its picture-cropping algorithm sometimes ‘preferred’ white faces over black faces, making facial recognition much harder for people with darker skin tones. In 2018, Amazon was accused of sexism after an AI tool it was using to sort CVs learned to prefer male candidates over female candidates. Negative publicity like this damages brand equity, which inevitably takes a long time to repair, so it is critical that companies remove bias from AI. However, AI is a tool, and as such it is only as biased as the data sets on which it relies. To stop prejudiced algorithms, we must start by evaluating our own biases.

While ‘bad AI’ makes for attention-grabbing headlines, AI is not inherently good or bad; like any tool, it can be used for either. A knife, for example, can be used to chop salad or to attack someone – the outcome is ultimately decided by the person wielding it. In the same way, AI tools are trained on data, and the way that data is collected, ingested, analysed and used will ultimately shape the results. If AI is trained on biased data sets, the outcomes are likely to be biased too. Taking the Twitter example: if a deep-learning algorithm processes more photos of light-skinned faces than dark-skinned ones, the resulting facial-recognition system will recognise light-skinned faces more easily.
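The mechanism is easy to demonstrate with a toy model. The sketch below is not Twitter's system or any real one: it invents a one-dimensional 'match score' for two groups, tunes a single decision threshold on a training set that is 90 per cent group A, and then measures accuracy per group. All numbers and names are made up for illustration.

```python
# Minimal sketch of how a skewed training set biases per-group accuracy.
# Everything here (groups, scores, the 90/10 split) is invented.
import random

random.seed(0)

def sample(group, n):
    # Hypothetical 1-D "match score": genuine matches for group A centre
    # on 0.7, for group B on 0.5; non-matches sit 0.4 lower. The model
    # must pick one threshold that serves both groups.
    centre = 0.7 if group == "A" else 0.5
    return [(random.gauss(centre, 0.1), 1) for _ in range(n)] + \
           [(random.gauss(centre - 0.4, 0.1), 0) for _ in range(n)]

# Training data: 90% group A, 10% group B (the under-represented cohort).
train = sample("A", 900) + sample("B", 100)

def accuracy(data, t):
    return sum((score >= t) == bool(label) for score, label in data) / len(data)

# Pick the threshold that maximises accuracy on the skewed training set.
best_t = max((t / 100 for t in range(100)), key=lambda t: accuracy(train, t))

# Evaluate on balanced test sets, one per group: the threshold tuned on
# mostly-A data serves group A well and group B noticeably worse.
acc_a = accuracy(sample("A", 1000), best_t)
acc_b = accuracy(sample("B", 1000), best_t)
print(f"threshold={best_t:.2f}  group A accuracy={acc_a:.2f}  group B accuracy={acc_b:.2f}")
```

Nothing in the toy classifier mentions group membership; the disparity comes entirely from the composition of the training data, which is the article's point.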

To tackle AI bias, it is critical that development teams are educated about transparent means of data collection to ensure they’re not feeding the AI with biased data sets. To achieve this, businesses must aggregate diverse data to get as full a picture as possible, and remember that data can be biased not only by what’s included, but what’s excluded. Oversampling of certain cohorts, for example, can distort results and fuel bias.
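One low-cost way to catch over- or under-sampling before it reaches a model is simply to audit the cohort composition of the training set against the population it is meant to represent. A minimal sketch, assuming records carry a cohort field and an expected population share – the field names and the 50/50 expectation are invented here:

```python
# Sketch: audit a training set's cohort shares before feeding it to a model.
# Record fields and expected shares are hypothetical.
from collections import Counter

records = [
    {"id": 1, "cohort": "A"},
    {"id": 2, "cohort": "A"},
    {"id": 3, "cohort": "B"},
    {"id": 4, "cohort": "A"},
]

counts = Counter(r["cohort"] for r in records)
total = sum(counts.values())
shares = {cohort: n / total for cohort, n in counts.items()}

# Flag any cohort whose share deviates heavily from the share we expect
# in the population (assumed 50/50 purely for illustration).
expected = {"A": 0.5, "B": 0.5}
flagged = [c for c in expected if abs(shares.get(c, 0.0) - expected[c]) > 0.2]
print(shares, flagged)
```

Here group A makes up 75 per cent of the records against an expected 50 per cent, so both cohorts are flagged – the kind of oversampling the paragraph above warns can distort results.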

Having the right underlying infrastructure is critical to addressing this, and that infrastructure must enable in-depth analysis of data and transparent, open means of data collection. Data platforms must link siloed databases together to create a holistic view of individuals, which can then be analysed easily and used to support marketing strategies and engage prospects and customers.

It is also important that businesses avoid using any kind of demographic pattern when developing algorithms. There have been past instances of AI using gender, or the predominant ethnicity in a given area, to influence the price or contract offered to a prospective client, leading to unethical price discrimination. And when demographic data is used to decide whether to provide financial services to an individual, or to set insurance payments, the outcome can be particularly harmful.
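In practice, keeping demographic patterns out of a model starts with stripping demographic fields from the feature records before training. A minimal sketch – the field names are invented, and note that an excluded field like a postcode can still leak back in as a proxy through correlated features, so exclusion alone is not a complete fix:

```python
# Sketch: drop demographic fields from feature records before training.
# Field names are hypothetical; "postcode" is included because location
# often acts as a proxy for ethnicity or income.
DEMOGRAPHIC_FIELDS = {"gender", "ethnicity", "postcode"}

def strip_demographics(record):
    # Keep only the keys that are not on the demographic exclusion list.
    return {k: v for k, v in record.items() if k not in DEMOGRAPHIC_FIELDS}

row = {"age_of_account": 3, "gender": "F", "postcode": "SW1", "basket_size": 42}
clean = strip_demographics(row)
print(clean)  # only age_of_account and basket_size remain
```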

Despite the maligned depiction of AI, when implemented properly it can be instrumental in driving engagement, enhancing efficiency and increasing conversions. With the amount of data in the world increasing exponentially, AI will also be essential in helping businesses unlock valuable insights at scale. But to realise AI’s promise, we need to ensure it is properly implemented and properly trained. Rather than blaming AI for bias, we must look deeper at the human causes.

Omer Artun is chief science officer with Acquia.
