
Analysis: AI sector deal from UK government sidesteps the smart choices

AI is one of those areas where winter always seems to be coming. It has weathered more than a few short summers before harsh reality sets in and the IT industry moves on to something else. This time it might be different. The UK government believes so, having committed more than £1bn in one of its industrial sector deals to encourage the development of AI in business.

The good news is that there’s a sector deal for AI. The bad news is that the sector-deal document is light on detail and appears to be cut from the same cloth as the many think-tank reports that funnel into government. Those often make a big deal of promising fairness or choice to the citizen, but follow up with an explanation that implies something far more regressive.

Take the section in the sector deal on data trusts. These sound like a good idea. Who doesn’t like trust? Yet the description is a little disconnected: “To address these issues, we are working with industry to pioneer mechanisms for data sharing such as data trusts. These frameworks will ensure that all parties involved have defined rights and responsibilities towards the data and individuals’ personal data, and other sensitive data, is protected. For example, the vision for data trusts is that they will allow two or more parties in any sector to partner in data-sharing agreements, shape the agreements according to their needs and enable multiple organisations to work together to solve a common problem.”

Notice how it starts with the rights of individuals, but then moves somewhere else? The underlying intention is more about making it easier for data processors to process each other’s catalogues of data without too much concern over what is in them - and even to tweak the arrangements despite being nominally managed as a trust, a concept that implies things are nailed down beforehand. The utility of the data presumably makes those changes OK. Everybody wants solutions to common problems, don’t they?

In reality, the data trust itself is pretty much orthogonal to the rights of individuals. Those rights rest on things like laws based on the GDPR and the forthcoming ePrivacy regulation from the EU, assuming the UK keeps either or both after 2019 – and those decisions will largely be informed more by a Brexit transition deal than by a conscious focus on the ramifications of AI. By making it easier to outsource the handling of data, there is potentially less scope for accidents by companies with limited experience, but the trust concept is not in itself a protection. The role of trustees in corporate pensions has demonstrated that time and again.

The Centre for Data Ethics and Innovation seems similarly conflicted. The discussion of ethics does not go far beyond the name in the initial sector-plan document: “Finally, a Centre for Data Ethics and Innovation will be tasked with ensuring safe, ethical and ground-breaking innovation in AI and data-driven technologies.” It seems its role is more about innovation than the enforcement of ethical behaviour across industry. I’m all for ethical innovation. I worry more when the need has to be stated overtly. It is a little like those countries with the terms ‘democratic’ or ‘people’s’ in their names.

At the same time as promoting ethics in an indirect fashion, the sector deal gives entrepreneurs stronger incentives to drape an AI flag over their investments. If you have a “knowledge-intensive business” you can get a more beneficial taxation regime under arrangements like venture capital trusts. Such focused financial breaks do not often lead to sustainable outcomes. VCTs have been prone to exploitation as tax avoidance vehicles in the past and the latest sector deal may well have opened up a new loophole. I wonder what checks will be made on how “knowledge intensive” an investment vehicle is.

There is a reason why AI suffers from intense seasonality and why the sector deal is probably the wrong implement to make the UK a leader. With AI, it’s really easy to make mistakes. Bad mistakes. As the most recent issue of E&T explains, things get tricky when you try to let a machine learn for itself. The machine can do a surprisingly good job of teasing out relationships in the data you never realised were there - and with that, you are quickly reminded of the old joke: to err is human, to mess up royally you need a computer.

In E&T’s May issue, Rich Caruana of Microsoft Research has a salutary tale of how an AI can learn the opposite of what you want it to despite nothing being wrong with the data or the training. In one case, deployment of an AI-based model would have meant asthma sufferers at risk of pneumonia being turned away from hospital – the opposite of current practice.
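
To see how accurate records can still teach a model the wrong lesson, consider a minimal, hypothetical sketch of the kind of landmine Caruana describes. The numbers and variables here are invented for illustration: asthma genuinely raises risk, but because asthmatic patients are treated aggressively, the recorded outcomes make asthma look protective to a model that cannot see the treatment.

```python
# A hypothetical 'landmine': every record is accurate, yet the model
# learns that asthma is protective. Severity and the treatment effect
# are invented numbers for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
asthma = rng.integers(0, 2, n)                  # 1 = patient has asthma
severity = rng.normal(0, 1, n) + 0.5 * asthma   # asthma raises true risk

# Clinicians treat asthmatic pneumonia patients aggressively,
# which sharply lowers their probability of dying.
p_death = 1 / (1 + np.exp(-(severity - 1.0)))
p_death = np.where(asthma == 1, 0.3 * p_death, p_death)
died = rng.random(n) < p_death

# The model sees only the asthma flag and the recorded outcome;
# severity and treatment, like many clinical confounders, are hidden.
model = LogisticRegression().fit(asthma.reshape(-1, 1), died)
print("asthma coefficient:", model.coef_[0, 0])  # negative: looks 'protective'
```

Trained on these records, the model assigns asthma a negative risk coefficient - exactly the relationship that, deployed naively, would send the highest-risk patients home.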

The nature of machine learning means that it can be difficult to pick up on what went wrong. Some of the most successful technologies are black boxes. Caruana is keen to see more interpretable models being deployed in sensitive areas such as healthcare: “My experience is that many data sets have these landmines in them. Everything is right but it learns something that’s wrong. Working with intelligible models I see it all the time.

“The mistakes that machine learning makes can be very different to those that humans make. It’s an idiot savant with an incredibly narrow understanding of the world.”

Often, AI models will work well on test data, but fail when hit with real-world inputs. Again, the test data was not wrong, but it was probably far from complete. However, at such an early stage in a technology’s development, enthusiasm easily overtakes caution.
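
As a toy illustration (not an example from the article): when training and test data both cover only a narrow slice of the input space, a model can score highly on its own test set and still fail on live inputs from outside that slice.

```python
# A toy model that passes its own test set but fails in the wild:
# training and test inputs cover only x in [-1.5, 0], so the single
# decision boundary the model learns is enough - until live inputs
# arrive from a wider range it has never seen.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def true_label(x):
    # The real rule is a band: positive only for -1 < x < 1.
    return ((x > -1) & (x < 1)).astype(int).ravel()

x_dev = rng.uniform(-1.5, 0, (2000, 1))          # narrow development data
model = LogisticRegression().fit(x_dev[:1000], true_label(x_dev[:1000]))
print("test accuracy:", model.score(x_dev[1000:], true_label(x_dev[1000:])))

x_live = rng.uniform(-1.5, 2.5, (2000, 1))       # wider real-world inputs
print("live accuracy:", model.score(x_live, true_label(x_live)))
```

The test score looks near perfect; the live score collapses, because the model never saw the upper boundary of the band and confidently labels everything beyond it as positive.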

Caruana says: “Finally, for the first time, some significant fraction of the community is now paying attention to the need to better understand what AI is doing. It’s finally getting attention. But still the majority of people in the community don’t recognise the problems. I view my main job as being educating people about the risks. Every data set you play with has these problems. This is not a third decimal-place effect. This is real.”

The risk of moving quickly to embrace AI is that it creates a backlash and, with that, the start of a new winter for what is a useful technology. The answers lie not just in education but in regulation – something to which the current administration has an allergic reaction.

Not everything needs to be regulated. There are large databases that can readily be shared without compromising individual privacy. Intriguingly, those are mostly the examples in the sector-plan document. It neatly avoids spelling out the more sensitive cases, such as medical data. Such data can be anonymised and shared: research on differential privacy, which limits how much any individual’s data can leak from published statistics, is now well advanced. This is where government needs to look closely at what is required.
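
For a flavour of what that research offers, here is a minimal sketch of differential privacy’s workhorse, the Laplace mechanism. The medical framing, dataset and epsilon value are illustrative assumptions, not anything specified in the sector deal: noise calibrated to a query’s sensitivity lets a data holder publish an aggregate count while mathematically bounding what it reveals about any one person.

```python
# A minimal sketch of the Laplace mechanism, the basic tool of
# differential privacy. Dataset, query and epsilon are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def private_count(records, predicate, epsilon):
    """Release a noisy count of records matching `predicate`.
    Adding or removing one person changes a count by at most 1,
    so the query's sensitivity is 1 and the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g. how many patients in a shared dataset are over 65, published
# without revealing whether any individual patient is in the data
ages = rng.integers(18, 95, size=10_000)
print(private_count(ages, lambda age: age > 65, epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy; the point is that the trade-off is explicit and provable, rather than resting on a promise that a dataset has been “anonymised”.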

As with IT before, the rewards of AI are such that they do not really need financial incentives, although these are the focus of any industrial sector deal. The benefits come in efficiency and service improvements. The incentives should be to get AI to the point where it works well for an organisation and to be more realistic about the difference between R&D that looks useful and real-world implementations that will quickly fall prey to internal biases and improper correlations.

It is perhaps something that is beyond what an industrial sector deal can achieve, but the industrial sector deal is the current hammer of choice for this government, so we will have to live with it.
