Ethics must ‘take centre stage’ in AI development, warns parliamentary report
Putting ethics at the core of artificial intelligence (AI) in coming years could help boost the UK economy and allow the country to be a world leader in the sector, a report by the House of Lords Select Committee on AI has said.
“The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences,” said Lord Tim Clement-Jones, Liberal Democrat peer and chairman of the committee, in a statement.
According to the report, AI in the UK: Ready, Willing and Able?, AI will have an enormous impact on the nature of work in the UK, with some jobs disappearing, some being enhanced, some yet-unknown jobs being created and many more changing. To help people cope with this change, the committee recommends early education in AI alongside lifelong retraining, so that citizens can “flourish mentally, emotionally and economically alongside [AI]”.
Among other approaches, this could involve the creation of a state-run service to provide free education throughout life, as proposed by Leader of the Opposition Jeremy Corbyn at the Labour Party conference in September 2017: a ‘National Education Service’.
The report also suggests that children should learn about ethical design and use of AI tools at school, with the subject becoming an “integral part of the curriculum”.
In its report, the committee focused on how AI can be adopted in the UK in an ethical and controlled manner, suggesting five guiding principles for this adoption: AI should be developed for the “common good and benefit of humanity”; it should operate with intelligibility and fairness; it should not come at the expense of privacy; education is required to allow citizens to live with its advances; and AI should never have the capability to “hurt, destroy or deceive human beings”. The committee called for these five principles to form the heart of a ‘cross-sector AI code’ which could be adopted nationally and even internationally.
The development of AI-equipped weapons has been a source of fear worldwide, with ongoing UN meetings dedicated to discussion of a worldwide ban on lethal autonomous weapons.
Meanwhile, the dependence of machine learning – the computational methods serving as the basis for many AI applications – on huge amounts of data has led to concerns over the security of individuals’ personal data. These fears have been exacerbated following reports of mass data harvesting for the purposes of developing political adverts based on personality profiles of Facebook users.
The report recommends that data-gathering methods change so that individuals have fair access to their own data and the ability to protect their privacy and agency. This will require new legislation, as well as measures against the monopolisation of data by tech giants, beginning with a review of data use by the UK competition watchdog.
“I think our conclusions, although they were written before the Cambridge Analytica issues came to the fore, they pretty much cover all [the subsequent concerns]. One of the aspects we are particularly interested in is data monopolies, potential data monopolies and the influence of the platforms and big tech, which we have covered to a large degree,” Clement-Jones told E&T.
The peers have also warned against AI systems displaying bias due to past and present prejudices being “unwittingly” built into these systems, suggesting that datasets used in AI should be audited for bias. For instance, a recent study found that most commercial facial-recognition tools were markedly less accurate at identifying women and people with darker skin. Recruiting and training more diverse groups of people to become AI specialists could also help prevent these biases, the report suggests.
According to Clement-Jones, ensuring that the future of AI in the UK is handled appropriately requires action across all policy areas. Particularly important are the impact of Brexit on European AI experts in British industry and academia, and the need for a “national retraining scheme” to ensure that citizens are not disempowered by AI. Clement-Jones believes the current government is willing to treat AI as a priority, despite the likely expense of implementing the report’s recommendations.
“There’s always a resources issue, but quite frankly we know the government are treating this as a priority. Theresa May gave a major speech at Davos on AI; Matt Hancock is the digital minister and has always taken a strong line on ethical aspects of AI […]; we’ve got the government Office for AI already up and running, and of course the budget in November included quite a lot of extra resources in the field,” Clement-Jones told E&T. “I do feel reasonably confident that where it’s needed I think it’s going to be provided.”
“I think we’ve got a fair wind behind us. The question is: will the UK take the lead and can we take this forward and really get something powerful within a reasonable space of time?”