In 'Ex Machina', the robot Ava (played by Alicia Vikander) possesses the world's first true artificial intelligence

Ex Machina movie asks: is AI research in safe hands?

Concerns over the existential threat posed to humankind by artificial intelligence reached fever pitch in 2014, and the forthcoming movie 'Ex Machina' taps straight into them. Are the fears justified, or are there more pressing concerns?

Man’s desire to follow in the footsteps of his creators and build something in his own image has a long literary history. From the animated statues of Greek myth, to the clay Golem of Jewish folklore, to Mary Shelley’s Frankenstein, alchemists, mystics and mad scientists have long been engaged in the quest for artificial sentience.

Nowadays it’s billionaire tech entrepreneurs who are tinkering where only gods should pry. At least that’s the story told by the forthcoming film 'Ex Machina', which introduces Nathan, the brilliant but reclusive CEO of the world’s largest web-search company who has lost sight of his morals in an obsessive quest to create human-level artificial intelligence (AI).

A young coder called Caleb who works at Nathan’s company is summoned to his boss’s mountain retreat after winning a competition to meet the secretive genius. But it soon becomes clear that he is really there to take part in a modified Turing Test to find out whether he can be convinced that a beautiful female robot called Ava is conscious despite knowing that she is artificial.

The film descends into a psychological thriller that explores many of the issues that the long-heralded advent of human-level AI will pose for mankind. Can a machine truly think or only imitate the process? Is a machine capable of love, sexuality and duplicity? What rights would a conscious machine have?

But perhaps the most prescient aspect of the film – a directorial debut for scriptwriter Alex Garland of 'The Beach', '28 Days Later' and 'Sunshine' fame – is the character Nathan. According to Murray Shanahan, a leading cognitive roboticist at Imperial College London and a technical adviser on the film, Garland showed him the finished script in early 2013, just as a prolonged drive of AI and robotics acquisitions by Silicon Valley companies got under way, many with charismatic geniuses at the helm.

The AI gold rush

Google has spearheaded this drive, with a particular focus on start-ups specialising in ‘deep learning’, a sub-branch of machine learning that excels in areas considered crucial to the development of AI, such as computer vision, speech recognition and natural language processing. But with Facebook announcing the creation of an AI lab in 2013, IBM making steady progress with its Watson AI system, and both Microsoft and Chinese search giant Baidu heavily invested in AI, competition in the field is heating up.
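For readers who have not met the term, here is a minimal sketch of the kind of model 'deep learning' refers to: layers of simple units whose connection weights are tuned by gradient descent until the network's outputs match the training data. The toy task, network size and learning rate below are arbitrary choices for illustration, not anything used by the companies mentioned.

```python
# Minimal sketch of a two-layer neural network learning the XOR function.
# Everything here (task, layer sizes, learning rate) is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden-layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output-layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of squared error, propagated layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

Real deep learning systems stack many more layers and train on millions of examples, but the underlying principle of adjusting weights to reduce error is the same.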

The resulting AI arms race has understandably caught the attention of the media and, as is often the case with game-changing technologies, many have been quick to forecast doom. Concerns over the potential existential threat to humankind posed by super-intelligent AI have been aired by influential voices including physicist Stephen Hawking and the founder of SpaceX and Tesla, Elon Musk, who has just announced a $10m donation to fund research into making AI safe.

On 2 January, leading lights of the field met for a secret three-day conference in Puerto Rico – organised by the Future of Life Institute (FLI) – which culminated in an open letter stating a commitment to ensuring that AI is beneficial to humankind. It was signed by a host of AI heavy-hitters, including Peter Norvig, Google’s director of research, and Yann LeCun, Facebook's head of AI.

The hush-hush nature of the meeting was designed to ensure that participants could speak freely without upsetting their employers or being misquoted by the press, but it is sure to play into a perception of secrecy at the bleeding edge of AI research. As such, Garland’s characterisation of an enigmatic champion of Silicon Valley, driven close to madness in his compulsive quest, taps into a growing sense of paranoia around the stewardship of AI technology.

“I think it is very prescient of Alex to set this thing in the country retreat of a tech billionaire,” says Shanahan. “People are asking themselves what Google and other big companies are up to with AI and we don’t know and we’d quite like to know.”

Don’t believe the hype

So are the fears of an AI-induced Armageddon justified? According to Shanahan, a signatory on the FLI letter, the main mistake made by the media – whether wilfully or not – was overplaying the immediacy of the threat. While most in the AI community agree such a scenario is plausible and welcome research into its implications, even the most optimistic predictions place it decades away. Huge hurdles need to be overcome before machines reach intelligence in any way comparable to our own, let alone super-intelligence.

Deep learning is in vogue right now and many, including some hard-nosed business people, are confident it holds the key to machine intelligence. But despite its promise Shanahan says it is probably only one of a series of components required for human-level AI: “It’s not so much that they’re barking up the wrong tree, but that they need a whole pack of dogs barking up loads of different trees to test the metaphor to destruction!”

A 2013 paper from the University of Oxford by machine learning expert Michael Osborne and economist Carl Frey found that rapid advances in AI meant that up to 47 per cent of American jobs are at high risk of being automated within 20 years. The paper also identified several major bottlenecks on the road to human-level AI.

Machines are still poor at interacting with unstructured environments and manipulating objects. Robots have found their way into factories and warehouses where roles and boundaries are firmly defined, but carrying out simple housework in a cluttered family home is beyond the most intelligent machines. Even Google’s driverless car, the poster boy of autonomous technologies, requires roads to be extensively mapped by a special sensor vehicle.

Computers are also a long way from achieving the social intelligence essential for dealing with humans, as evidenced by the countless failed attempts to pass the Turing Test. And despite the culinary exploits of IBM’s Watson, which has been coming up with new recipe ideas to mixed reviews, machines struggle to exhibit the kind of creativity that underlies humankind’s great achievements. “Creativity is different from coming up with novel instances,” says Osborne. “The difficulty with creative tasks is understanding human utility.”

“AI is fundamentally difficult,” says University of Montreal professor Yoshua Bengio, one of the founding fathers of deep learning alongside Facebook’s LeCun and Geoffrey Hinton, who now works part-time for Google. “I think it’s good for researchers to acknowledge that they have to take these questions seriously and I think there is a responsibility to answer these questions. But you have to realise this is a very long-term thing. There’s no immediate danger.”

He says the fear generated by excessive attention to the long-term impacts of AI research is unhelpful and overshadows the positive role of the big tech firms. They have brought new energy to the field through their internal research, through academic grants that keep a steady pipeline of talented AI experts flowing, and through access to datasets far richer than any academic could hope for.

But while he doesn’t lie in bed worrying about Skynet, Bengio does have concerns about some of the applications of AI. “It’s not so much computers becoming smarter than humans, but more how humans use these things for controlling people,” he says.

Clear and present danger

The focus on the long-term existential threat of AI by the media has obscured a host of other concerns that are far more pressing. In 2013, the revelations of former National Security Agency contractor Edward Snowden showed that advances in machine learning and big-data techniques have vastly expanded the ability of governments to monitor their citizens. Further advances in AI technologies such as natural language comprehension and image recognition will only increase surveillance powers.

AI technology is also increasingly being deployed by the defence industry. James Barrat, author of the 2013 book on AI ‘Our Final Invention’, says that 56 nations are currently developing battlefield robots. “While we’re only just working out how to make AI safe, other people are working out how to use it to kill people.”

“I think people are realising it’s a dual-use technology like nuclear fission. It’s capable of great good and great harm,” he adds. “What is happening in this industry is rapid product development, and in that, innovation runs ahead of stewardship.”

As well as the more direct harm that AI could do in the wrong hands, there is the societal upheaval of the economic revolution predicted by Osborne and Frey. What happens when half the world’s jobs are made redundant? How do you ensure that the benefits of AI are distributed evenly and don’t simply make the rich richer? How do you repurpose a centuries-old legal system to incorporate autonomous machines?

Perhaps most important are debates on how to program ethics into machines that could soon be making decisions on our behalf. They could choose whether or not a family is granted a mortgage, what the best course of treatment is for a patient, or how an automated vehicle weighs a high probability of costly material damage against a low probability of injury to a human.
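As a rough illustration of the kind of weighing involved, a decision system might compare the expected cost of each available action. All the probabilities and cost figures below are invented for the example and are not drawn from any real vehicle or lender:

```python
# Toy expected-cost comparison; every number here is made up for illustration.
def expected_cost(outcomes):
    """outcomes: list of (probability, cost) pairs for one action."""
    return sum(p * c for p, c in outcomes)

# Hypothetical actions for an automated vehicle facing an obstacle:
swerve = [(0.90, 20_000), (0.001, 5_000_000)]   # likely vehicle damage, tiny injury risk
brake  = [(0.30, 2_000),  (0.010, 5_000_000)]   # less damage, higher injury risk

for name, action in [("swerve", swerve), ("brake", brake)]:
    print(name, expected_cost(action))
# Everything hinges on the cost assigned to injury, which is exactly the
# ethical judgement that has to be programmed in.
```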

“If algorithms are making important decisions about people’s lives and they are drawing on data on race or gender, or correlating, then we need to make sure these algorithms can make the right ethical decisions, even if they’re not super-intelligent,” says Osborne.

The challenge of implementation

If we assume an ethical and legal framework for AI can be agreed upon, implementing it may prove a far greater challenge. Alongside its open letter, FLI released a document highlighting research priorities in the quest to make AI beneficial for humankind. The bulk of it deals with the issues of validity, verification, security and control – areas of AI robustness research essential to ensuring the technology does what its creators intended.

Seán Ó hÉigeartaigh, who manages the Cambridge Centre for the Study of Existential Risk (CSER), points to the example of algorithms deployed by two Amazon book sellers in 2011, whose unexpected interaction pushed the price of a book as high as $23m before the vendors realised what had happened. “We can design a system that looks pretty sensible, but because of interactions that we didn’t expect or assumptions embedded in the system that didn’t hold, it ends up doing something unexpected,” he says.
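A toy simulation makes clear how such a feedback loop can run away. The repricing ratios and starting price below are illustrative guesses, not the sellers' actual rules:

```python
# Two automated repricers reacting to each other (illustrative numbers only):
# seller A slightly undercuts B, seller B prices at a premium over A.
price_a, price_b = 40.00, 40.00          # hypothetical starting prices in dollars
for day in range(25):
    price_a = 0.998 * price_b            # seller A: just undercut seller B
    price_b = 1.27 * price_a             # seller B: charge a 27% premium over A
    print(f"day {day + 1}: A = ${price_a:,.2f}, B = ${price_b:,.2f}")
# Each rule looks sensible on its own, but together the prices grow by roughly
# 27 per cent per cycle until someone notices.
```

Neither rule is obviously broken in isolation; the damage comes entirely from the interaction, which is Ó hÉigeartaigh's point.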

Translated into even moderately powerful AI systems, this kind of issue could have disastrous consequences, and as AI systems become more complex, predicting their behaviour will only get harder. Viktoriya Krakovna, a statistician and one of the founders of FLI, was disappointed at the media’s focus on the dangers of a malevolent AI in their coverage of the open letter: “A robot uprising, I don’t think that’s very realistic,” she says. “What is more realistic is a mis-specification of an AI that is trying to do good.”

Safe hands

Dealing with these issues will require coordination across a broad range of disciplines, but there is fear in the AI community that ill-conceived interventions from policymakers could not only hamper progress, but also do more harm than good. History is littered with examples of the unintended harm done by prohibition, and British Prime Minister David Cameron’s recent call to curtail encryption technology suggests lessons have not been learnt.

Some of those in the community say initiatives like FLI’s open letter and Google’s creation of an ethics board to oversee the work of DeepMind, its AI subsidiary, signal a willingness within the industry to confront the issues. Krakovna says FLI’s meeting was, to an extent, modelled on the 1975 Asilomar Conference on Recombinant DNA – a landmark meeting that saw the biomedical community come together and create guidelines to address potential biohazards presented by the technology.

“In the case of AI safety it’s much less clear whether there are any particular research directions that are particularly dangerous at this point,” she says. “But in both cases, the basic idea is that researchers in the field are responsible people who want to make sure that their work is beneficial to the world.”

The big tech companies leading the field are certainly far more open than traditional industrial labs, Imperial’s Shanahan points out. Their researchers regularly publish preliminary findings on the pre-print repository arXiv, and Google, Facebook, IBM and Microsoft have all established university partnerships.

But many have shaky records when it comes to privacy and competition, which suggests their motives are not always squeaky clean. And with the impacts of AI likely to stretch into domains the community has little understanding of, they may not be able to coordinate the holistic solutions that will be necessary.

“I don’t know if expecting self-regulation is somewhat naïve and idealistic,” says CSER’s Ó hÉigeartaigh. “I think the final answer is going to be some kind of compromise between hard regulation from government and soft self-regulation.”

What regulation should look like is difficult to say. Oxford’s Osborne believes that professional certification for machine learning and AI workers – similar to that required of lawyers, accountants and doctors – could help maintain standards. Barrat favours a public-private partnership modelled on the International Atomic Energy Agency to bring together experts and policymakers. He also believes that starting regulation now on autonomous battlefield robots will provide valuable lessons for regulating other areas of AI in the future.

Whatever the answer is, the issues are “right here, right now”, says Ó hÉigeartaigh, so efforts to solve them need to start straight away. First and foremost will be education – if the community wants to avoid the Terminator headlines and knee-jerk reactions from politicians, they need to make it clear where the real dangers lie.
