AI bias will ‘explode’ over next five years, IBM predicts
Image credit: IBM
IBM’s new ‘Life in 2023’ predictions foresee new roles for artificial intelligence in monitoring water quality and the use of blockchain technology combined with ‘crypto-anchors’ to combat fraud in supply chains.
According to the computing giant, bias in artificial intelligence (AI) is set to explode over the next five years, but only unbiased algorithms and systems will survive and prosper.
The prediction was one of five ‘Life in 2023’ prophecies made by the American corporation as part of its inaugural Think conference, designed as a forum for industry leaders to share big ideas.
There has been a string of controversies over the use of predictive policing and offender-management AI products in the USA, while an apparent lack of transparency on the part of the developers of criminal justice algorithms has deepened mistrust between tech firms and the public.
Critics have cited cases in which feedback loops allegedly developed: police chiefs deployed more officers to historically ‘overpoliced’ areas, more crime was recorded there as a result, and more resources were in turn ploughed into policing the same ‘stigmatised’ neighbourhoods, ad infinitum.
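The loop the critics describe can be made concrete with a toy simulation (a sketch with made-up numbers, not a model of any real deployment): two areas generate crime at the same underlying rate, but recorded crime depends on how many officers are watching, and resources follow recorded crime.

```python
# Toy feedback-loop simulation (hypothetical numbers, illustration only).
# Two areas have the SAME true crime rate, but recorded crime rises with
# the number of officers present, and each period one officer is moved
# toward the area with more recorded crime.

TRUE_RATE = 100    # identical underlying offences per period in each area
DETECTION = 0.05   # fraction of offences recorded per officer deployed

def simulate(officers_a, officers_b, periods=10):
    for _ in range(periods):
        recorded_a = TRUE_RATE * min(1.0, DETECTION * officers_a)
        recorded_b = TRUE_RATE * min(1.0, DETECTION * officers_b)
        # Reallocate one officer toward the area with more recorded crime
        if recorded_a > recorded_b and officers_b > 0:
            officers_a, officers_b = officers_a + 1, officers_b - 1
        elif recorded_b > recorded_a and officers_a > 0:
            officers_a, officers_b = officers_a - 1, officers_b + 1
    return officers_a, officers_b

# A small initial imbalance snowballs: area A ends up with every officer,
# even though both areas generate crime at exactly the same true rate.
print(simulate(11, 9, periods=9))   # -> (20, 0)
print(simulate(10, 10, periods=9))  # -> (10, 10): no imbalance, no loop
```

The point of the sketch is that nothing in the loop ever measures the true crime rate; it only ever measures recorded crime, which is itself a function of where officers were sent.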
In 2016, American investigative journalism website ProPublica published a detailed analysis showing how COMPAS, an algorithmic tool used to predict defendants’ risk of reoffending, was disproportionately likely to misclassify black defendants as future criminals.
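The kind of disparity ProPublica highlighted is simple to express as a check on false-positive rates across groups (the counts below are made up for illustration and are not ProPublica’s figures):

```python
# Compare false-positive rates across groups: the share of people who did
# NOT go on to reoffend but were nonetheless labelled high risk.
# Counts are hypothetical, for illustration only.

def false_positive_rate(flagged_high_risk, total):
    """flagged_high_risk: non-reoffenders labelled high risk;
    total: all non-reoffenders in the group."""
    return flagged_high_risk / total

# group -> (non-reoffenders flagged high risk, all non-reoffenders)
groups = {"group_1": (45, 100), "group_2": (23, 100)}

for name, (flagged, total) in groups.items():
    print(f"{name}: FPR = {false_positive_rate(flagged, total):.2f}")
# A large gap between the two rates is the kind of disparity an audit flags.
```

An audit of this sort needs outcome data (who actually reoffended), which is one reason transparency from vendors matters: the check cannot be run on risk scores alone.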
In New Orleans, the existence of a hitherto unknown partnership between that city’s police department and CIA-funded AI vendor Palantir was revealed earlier this year, prompting claims of undemocratic institutional secrecy around use of this technology.
IBM, one of the prime movers in the field of AI, acknowledged that many current systems contain bias, but the firm struck an optimistic tone, claiming it would be possible to curb discrimination by giving machines a greater role in decision-making.
“Identifying and mitigating bias in AI systems is essential to building trust between humans and machines that learn,” the corporation said in a statement. “As AI systems find, understand and point out human inconsistencies in decision making, they could also reveal ways in which we are partial, parochial and cognitively biased, leading us to adopt more impartial or egalitarian views.”
This stance echoes the position of Fei-Fei Li, an associate professor of Computer Science at Stanford University, who has said the best approach to countering misgivings about AI might be to embrace it and engineer it to be better, rather than to dismiss it as flawed.
Alexander Babuta from defence think-tank the Royal United Services Institute, who is currently preparing a new paper about the security uses of AI, told E&T that fears of a Minority Report-style criminal justice system developing were far-fetched and he predicted that AI would instead make society fairer.
“I don’t think this is ever going to replace human decision-making, merely augment and enhance it,” he added.
However, others have warned that automation could simply be used by governments as an excuse for cuts to public services.
Emma Williams, deputy director of the Canterbury Centre for Policing Research, told E&T: “There have been questions about how much this AI crime prediction stuff can really do, compared with what your local police crime analyst can do.
“Crime analysts are fundamentally underutilised in the UK and they are now being cut in various places and are probably being replaced with big data software options coming in from the US and being developed over here.”
As part of the launch of its Think conference, IBM has released an online test designed to show how machine-learning offender-management models weight different factors about individuals when informing police custody decisions about their likelihood of committing future crimes.
IBM’s other predictions include how increased trust will develop in supply chains, arising from the use of blockchain technology combined with “crypto-anchors”. These are described as “tamper-proof digital fingerprints” that can be embedded into products and linked to a shared ledger to provide unclonable identification, thus helping to combat fraud and protect consumers.
The company also predicts that lattice cryptography will help defend against hackers, and says that within five years autonomous AI microscopes, networked in the cloud and distributed around the world, will routinely and continuously monitor the condition of oceans and rivers as part of efforts to clean up natural resources. The company has previously hinted that a similar surveillance system could be developed for hunting down human traffickers.
The firm’s fifth and final prediction concerns quantum computing, which IBM foresees becoming “mainstream” in just half a decade’s time.