
Integrating AI into diversity initiatives poses risks as well as opportunities
As businesses rush to take advantage of artificial intelligence tools, they must recognise that human intervention may be needed to make sure unreliable data isn't thwarting attempts to boost inclusion and diversity.
The world is changing rapidly and businesses need new strategies that enable them to evolve at the same speed. For many organisations, success will depend on making a cultural shift towards upskilling their workforce - empowering them to answer questions and make decisions swiftly. Without a diverse and inclusive workforce, however, the insights generated can remain rigid and stale or, worse yet, biased and divisive.
While every business is different, diversity and inclusion (D&I) should be integrated into every organisation's core, not only to increase competitiveness but also to help it respond promptly to the next big market challenge by drawing on more diverse points of view.
Research from McKinsey has found that business success correlates directly with higher levels of diversity: companies with greater gender diversity are 25 per cent more likely to achieve above-average profitability, and with high ethnic diversity that figure rises to 36 per cent.
Fostering diversity from the top down and drawing on as many diverse viewpoints as possible is key to making these strategies succeed. Tools based on artificial intelligence and machine learning have a part to play too, but just as a hammer won't pick itself up and hit a nail, AI is entirely dependent on its creator and user to perform. It has the potential either to completely redefine the way businesses work with data, delivering hyper-relevant data models and business insights, or to fall into a loop of self-propagating bias.
As we attempt to redefine the world of work, the risk is that the very tools we use can be our undoing if they are operated without foresight and care. When leveraging machine learning and advanced analytic methods, we must be careful that inputs don’t bias the outcomes.
In many cases, the models that AI tools rely on are built on historical data, and if those data contain biases, they can propagate into future decision-making. Models used to automate resume screening, for example, have been found to be biased against women applying for jobs, while models used to assist judges in criminal sentence reviews have been found to be biased against black defendants and have since been withdrawn. Both cases illustrate that the choice of data and the application of the resulting solution need careful human attention.
If, for example, an organisation is looking to hire a new worker, the data may recommend a white male, not necessarily because other candidates lack the right skills, but because the historical data indicate that this group performed well. The reality is simply that data on other workers is less readily available, so the model has less evidence with which to assess them.
Automation and machine learning have one primary limitation: context. Automated analysis is extremely effective, but it is hollow without knowing how and where to apply the findings most effectively, or an understanding of what may have caused the trends it uncovers.
The context here is that, due to a historic lack of diversity and inclusion, other demographics are not as well represented in the available data. Even Amazon mothballed its own AI recruitment algorithm after the legacy CVs it was trained on were found to come overwhelmingly from men, resulting in an inadvertent bias against women.
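None of this requires exotic tooling to spot. A review team can surface this kind of skew with a simple comparison of selection rates across groups, along the lines of the minimal sketch below. The column names, toy data and 80-per-cent threshold (the informal "four-fifths rule") are illustrative assumptions, not a description of any particular vendor's or employer's system.

```python
# Minimal sketch: compare a screening model's selection rates across
# demographic groups and flag large gaps. Column names, toy data and the
# 0.8 threshold are illustrative assumptions only.
import pandas as pd

def selection_rate_report(decisions: pd.DataFrame,
                          group_col: str = "gender",
                          outcome_col: str = "shortlisted") -> pd.DataFrame:
    """Selection rate per group, plus each rate relative to the highest group."""
    rates = decisions.groupby(group_col)[outcome_col].mean().rename("selection_rate")
    report = rates.to_frame()
    report["ratio_to_highest"] = report["selection_rate"] / report["selection_rate"].max()
    return report

if __name__ == "__main__":
    # Toy data standing in for a model's screening decisions.
    decisions = pd.DataFrame({
        "gender":      ["male"] * 50 + ["female"] * 50,
        "shortlisted": [1] * 30 + [0] * 20 + [1] * 15 + [0] * 35,
    })
    report = selection_rate_report(decisions)
    print(report)
    # Flag any group selected at less than 80% of the highest group's rate.
    flagged = report[report["ratio_to_highest"] < 0.8]
    if not flagged.empty:
        print("Possible disparate impact for:", ", ".join(flagged.index))
```

A disparity flagged this way is not proof of discrimination on its own, but it tells a human reviewer where to look before a model's recommendations are acted on.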
The ability to see problems from different perspectives is one of the most powerful tools in building an increasingly globalised business, and it also helps reduce bias in the decision-making process.
In November 2018, five billion consumers interacted with data. By 2025, that number is predicted to rise to six billion, or 75 per cent of the world's population. The ability to understand and properly utilise this burgeoning wealth of information is vital to remaining competitive. When applied to AI, however, this data resource needs context, ethics and human intelligence, as there are often variables at play that only humans can interpret and understand.
The core requirements when integrating an AI platform for D&I purposes are twofold: first, that the AI applications are used ethically; second, that the AI programme itself has been designed in a way that minimises inherent bias and supports ethical use. When it comes to AI, development priorities such as ethics, shareability, scalability and security are often treated as peripheral to the core goals of building a functional product and getting it to market quickly.
Strong steps are being taken in this area. The UK is one of the first countries in the world to introduce an algorithmic transparency standard aimed at stripping bias out of algorithmic decision-making. As well as a description of the tool, including how and why it is being used, the standard requires details of the datasets used to train the models, the level of human oversight, and how the tool itself functions. Perhaps the most important element the standard does not cover is ensuring that diverse teams are part of the creation and review process when algorithms touch areas that can affect diversity and inclusion, or human life more generally.
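In practice, that kind of transparency amounts to writing things down in a consistent, reviewable form. The sketch below shows what such a record might look like as a simple data structure; the field names are illustrative assumptions rather than the standard's actual schema, and the final field reflects the suggestion above about diverse review teams rather than anything the standard requires.

```python
# Illustrative sketch of a transparency record for an algorithmic tool.
# Field names and example values are assumptions for illustration only,
# not the UK standard's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmicTransparencyRecord:
    tool_name: str
    description: str               # what the tool is and what it does
    purpose: str                   # how and why it is being used
    training_datasets: List[str]   # datasets used to train the model(s)
    human_oversight: str           # where humans review or can override outputs
    how_it_functions: str          # plain-language account of the logic
    # Suggested addition beyond the standard: who built and reviewed the tool.
    review_team_composition: List[str] = field(default_factory=list)

record = AlgorithmicTransparencyRecord(
    tool_name="CV screening assistant",
    description="Ranks incoming applications against the job specification.",
    purpose="Shortlisting support; recruiters make the final decision.",
    training_datasets=["historical-applications (hypothetical dataset name)"],
    human_oversight="Every shortlist is reviewed by a recruiter before candidates are contacted.",
    how_it_functions="Supervised model over structured CV features.",
    review_team_composition=["data science", "HR", "employee resource groups"],
)
print(record.tool_name, "-", record.purpose)
```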
The human factor is simultaneously the greatest strength and the greatest weakness of artificial intelligence. AI lacks empathy and emotional intelligence, so we need to ensure there are humans in the process to provide that guidance. To truly unlock the value of AI, we need a combined approach - one where ethical AI development is integrated with deliberate and far-reaching employee upskilling campaigns. A business built on a foundation of multiple, diverse viewpoints is far better prepared to thrive in today's hyperglobal environment.
Alan Jacobson is chief data and analytic officer at Alteryx