Should we trust machine learning?

Machine learning plays a huge part in our lives, but as author Brian Christian asks in his new book, are these algorithms treating us fairly?

For better or worse, says Brian Christian, questions that link ethics and technology, particularly in the field of machine learning, “are not going away. In some ways I see this as one of the defining challenges of the decade ahead of us.” By ‘this’ he is referring to the core subject of his new book ‘The Alignment Problem’, which tackles the question of how we can ensure that the growth industry of machine learning “is behaving in the way we expect it to. How do we make sure that we can trust it and that we are safe and comfortable?”

Machine learning, says the author, whose previous books have included ‘The Most Human Human’ and ‘Algorithms to Live By,’ “is the fastest-growing sub-field in artificial intelligence and one of the most exciting things happening in science today, full stop”. But addressing the ethical and moral issues that go with empowering machines to perform aspects of our decision making “can’t happen soon enough and we need to put as much human brain power into it as we can”.

Part of the reason for trust and safety being so prominent on the agenda is that “we’re living in a transformative time in terms of our relationship with technology”, says Christian. “This has been driven by the rise of machine-learning systems that, rather than being explicitly programmed (‘if x then y’), are essentially taught by example. So, here’s 10,000 pictures of a cat, here’s 10,000 pictures of a dog. Figure out how to tell the difference.” Software of this kind, he continues, “is steadily replacing human judgement as well as traditionally written software of a more familiar kind”. Given that systems such as this are starting “to permeate society at almost every level, this raises the concern of whether the machines are learning what we think they’re learning. Are they internalising these human norms and concepts in the way we meant them to? And, most importantly, will they run in the way we expect them to?”
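The contrast Christian draws between explicit programming and learning by example can be sketched in a few lines. The following is a minimal, hypothetical illustration – a nearest-centroid classifier over two invented features standing in for real image data – not anything from the book:

```python
# A toy sketch of "taught by example": no hand-written rules ('if x then y');
# the distinction between labels is inferred from labelled examples.
# The feature names and numbers here are invented stand-ins for real images.

def train_centroids(examples):
    """Average the feature vectors of each label's examples."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest (squared Euclidean distance)."""
    return min(
        centroids,
        key=lambda lbl: sum((a - b) ** 2 for a, b in zip(centroids[lbl], features)),
    )

# Toy training set: (ear_pointiness, snout_length) -> label
examples = [
    ([0.9, 0.2], "cat"), ([0.8, 0.3], "cat"),
    ([0.3, 0.8], "dog"), ([0.2, 0.9], "dog"),
]
centroids = train_centroids(examples)
print(classify(centroids, [0.85, 0.25]))  # prints "cat"
```

Nothing in the program states what a cat is; the rule is implicit in the examples – which is precisely why, as Christian asks, we need to check whether the system has learned what we think it has.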

These questions have become dominant concerns in the AI research community over the past five years, says Christian, and with the first generation of machine-learning graduates starting to emerge from their university studies, he sees this moment as “the first wave of first responders arriving at the scene”. He admits that the concerns are hardly new (“they’re just more pressing today”), and elaborates by tracing the relationship between ethics and computer science to at least as far back as the 1960s, when the MIT cyberneticist Norbert Wiener published a paper on the moral and technical consequences of automation, in which he introduced the analogy of the ‘sorcerer’s apprentice’, where disaster follows despite our best intentions.

“What Wiener is saying is that the future of automation is not the world of fairy tales. This is actually coming to us.” Christian then quotes flawlessly from the paper: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire...” These words, says Christian, are “hauntingly prescient for us now as we enter into this period of machine learning”. If you wanted to sum up one of the central concerns of ‘The Alignment Problem’, you’d be hard-pressed to do it better.

We read it for you

‘The Alignment Problem: How Can Artificial Intelligence Learn Human Values?’

Brian Christian’s latest discusses ethics and safety in the machine-learning sub-field of artificial intelligence. Based on more than one hundred interviews with key players working on the front line of machine learning, ‘The Alignment Problem’ assesses the impact of this technology on the social, civic and ethical aspects of our lives. Christian sees machine learning as one of “the most transformative, if not the most transformative” development in science today. From the news we consume to deciding whether we get a car loan; from dynamic marketing to whether we are bailed or detained after being arrested, algorithms crunch away in the background determining our future. But, asks Christian in a superb analysis of machine learning and society, how can we be sure these systems are acting fairly and safely?

As regular citizens we may not be aware of just how much machine learning goes into our daily lives, says Christian. But if you’ve ever taken a photograph on a smartphone (the entire human population takes approximately 1.5 trillion digital photos each year) then you’re interacting with a machine-learning pipeline in the background that’s trying to identify faces, compensate for exposure, adjust focal length, composite multiple exposures and so on in real time. This might seem harmless, concedes Christian, “but it is a very good example of how these systems have invisibly inserted themselves between us and the world”.

It all gets more serious when he draws attention to other examples of machine learning that have the potential to create significant impact on an individual’s future and safety. These systems are used in loan and mortgage applications and in driverless vehicles. But perhaps the area most fraught with ethical dilemmas is how machine learning is used in risk assessment: determining whether you will be granted bail or be held on remand after you are arrested. Given that there are ten million arrests in the US per year (and 670,000 in the UK), it’s easy to see why some sort of automated assistance is needed in processing the detainees. Statistics such as these explain why “AI has gone from being a hypothetical thing to being the central scientific research topic of the field”.

In one of the most fascinating passages of ‘The Alignment Problem,’ Christian examines closely how machine learning is used in judicial processing of individuals. Before we even get to the point of discussing how this can all go wrong in terms of racial or gender bias, he points out that there is a fundamental human error in failing to differentiate between arrest and crime statistics, the implication being that if you’re arresting the wrong people there may not be the direct correlation between the two figures that the public would instinctively expect. He cites examples of how failures in machine-learning systems create negative outcomes for individuals, while the judiciary relying on the technology are unlikely to understand them and even more unlikely to be trained in their use. Even if they were, “the systems have a reputation for being ‘black boxes’”, he says. In other words, we have inputs and outputs and we can adjust the parameters, but “we don’t really know what’s going on inside”.
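The arrest-versus-crime distinction can be made concrete with a back-of-the-envelope calculation. The numbers below are entirely invented for illustration; the point, following Christian’s argument, is that identical offending rates can still yield very different arrest counts if policing intensity differs:

```python
# Hypothetical illustration: two groups offend at exactly the same rate,
# but one is policed more heavily, so an offence there is more likely
# to end in arrest. Arrest data alone then suggest a crime gap that
# does not exist. All figures are invented for the example.

offense_rate = 0.05                    # identical true offending rate
population = {"A": 10_000, "B": 10_000}
arrest_prob = {"A": 0.2, "B": 0.6}     # chance an offence leads to arrest

arrests = {g: population[g] * offense_rate * arrest_prob[g] for g in population}
print(arrests)  # {'A': 100.0, 'B': 300.0}
```

Group B shows three times as many arrests despite committing exactly as much crime, so a model trained on arrest records would inherit that distortion – which is why treating arrests as a proxy for crime is the “fundamental human error” flagged above.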

The good news is that “as we speak there are advances being made in the technology that is already out there in the world. There’s been a lot of research into how we can structure this to make it more transparent, so that we can more confidently understand what these systems are really doing.”

‘The Alignment Problem: How Can Machines Learn Human Values?’ by Brian Christian is from Atlantic Books, £20.


Defining the ‘problem’

Machine learning is an ostensibly technical field crashing increasingly on human questions. Our human, social and civic dilemmas are becoming technical. Our technical dilemmas are becoming human, social and civic. Our successes and failures alike in getting these systems to do ‘what we want’ offer us an unflinching, revelatory mirror.

As machine-learning systems grow not just increasingly pervasive but increasingly powerful, we will find ourselves more often in the position of the ‘sorcerer’s apprentice’: we conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realise our instructions are imprecise or incomplete – lest we get, in some clever, horrible way, precisely what we asked for.

How to ensure that these models capture our norms and values, understand what we mean or intend and, above all, do what we want has emerged as one of the most central and urgent scientific questions in computer science. It has a name: the alignment problem.

The reaction to this – both to the bleeding edge of research getting ever closer to developing so-called ‘general’ intelligence, and to real-world machine-learning systems touching more ethically fraught parts of life – has been sudden and energetic. A diverse group is mustering across traditional disciplinary lines. Leaders in industry and academia are speaking up to sound notes of caution and redirect their research funding accordingly. The first generation of graduates focused on the ethics and safety of machine learning is emerging. The alignment problem’s first responders have arrived at the scene.

Edited extract from ‘The Alignment Problem: How Can Artificial Intelligence Learn Human Values?’ by Brian Christian, reproduced with permission.

