Review

Book review: ‘Life 3.0: Being Human in the Age of Artificial Intelligence’

Why planning how we cope with superhuman artificial intelligence is the most important conversation of our time.

Imagine that everyone in the world over the age of five apart from you has died suddenly. The surviving infants have trapped you in a cell and decided that the best way to restore humanity to the planet is to keep you there in the capacity of an advisor. Suspicious that if they let you loose you’ll probably try to take control yourself, they’re extremely careful about how they communicate with you.

The tyrannical toddler scenario sounds like one that must have been done in science fiction at some point, but it is revived to great effect by Max Tegmark, a professor of physics at MIT and president of the Future of Life Institute, to help illustrate how an artificial intelligence (AI) might ‘feel’ in ‘Life 3.0: Being Human in the Age of Artificial Intelligence’ (Allen Lane, £20, ISBN 9780241237199).

Once we’re past the tipping point where AI is capable of redesigning itself in a much more efficient way than humans can, the now familiar argument goes, its capability will grow exponentially and could threaten the humans who created it.

Hence what Tegmark describes as “the most important conversation of our time” – how we prepare for this situation. Letting the reader imagine being an adult at the mercy of children plotting how to exploit their superior intelligence is one way in which he flips the situation so we think about things from the machine’s point of view, maybe anticipating how it would go about achieving its liberty.

Tegmark admits he has no idea what will happen if we succeed in building human-level machine intelligence. The point of ‘Life 3.0’ isn’t to provide a cast-iron solution but to get us thinking.

Beyond a sceptical tutting at stories about ‘the rise of the robots’, often illustrated with a killer android from one of the ‘Terminator’ movies, few people will have given much thought to which of Tegmark’s three branches of thought they subscribe to. The digital utopians and the techno-sceptics share the idea that there’s really nothing to worry about. The former are confident that checks and balances, combined with the gradual rate of progress, mean we’ll work things out as we go along and end up living happily together; the latter simply believe that the singularity – the tipping point at which machine intelligence outstrips that of humans – is so far off it’s hardly even a dot on the horizon.

Then there are the members of the beneficial AI movement, a more considered bunch who don’t think there’s any imminent danger, but who believe the potential risks are significant enough, and likely enough, that we should be thinking about them now.

That idea is the principle behind ‘Life 3.0’, which asks readers to think about what sort of future they want while there’s still time to shape it. At a time when natural disasters are dominating the news agenda and the world seems to be pulling back on tackling climate change, the reality of super-intelligent machines that may be decades or even centuries away could seem one thing too many to worry about. If we’ve learned anything from the way the internet has come to permeate so much of everyday life in such a short time, though, it should be that we can no longer predict what’s going to happen.

If you think you don’t really care, consider how you’d feel if you were knocked over by one of the self-driving cars that are expected to become a common sight on our roads before too long. That’s not a superhuman intelligence, but it is effectively an AI on wheels: its actions are the result of programming by myriad designers, yet it will have some capacity for learning.

How will the courts treat cases like this, and who will be ultimately responsible? The fact that situations like this are already on the agenda signals just why the longer-term issues covered by ‘Life 3.0’ need to be thought about before it’s too late.
