[Image: a still from Skyfall]

Book reviews

New books on how technology is changing the old business of spying, and the pros and cons of increasingly sophisticated artificial intelligence.

Espionage technology: the spies who came in from the cloud

Oxford University Press

In Spies We Trust: The Story of Western Intelligence

By Rhodri Jeffreys-Jones, £12.99

Let me start with two short quotes from Rhodri Jeffreys-Jones’s new history of the UK-US intelligence relationship, ‘In Spies We Trust: The Story of Western Intelligence’ (Oxford University Press, £12.99): “Spies have always been detested...”, “Intelligence flaps exert a constant... appeal”. At first glance, the two statements may appear to contradict each other. Yet, having read some other recent additions to the ever-thriving genre of spy (or ‘intelligence’, if you wish) non-fiction, I could not help noticing a clear logical link between our innate dislike of spying, which almost inevitably involves deception, and our dark fascination with that ancient profession and its mystery-clad practitioners.

To quote Jeffreys-Jones again: “in the 20th century espionage gained a certain respectability”. One major reason was that spying had gradually become ever more technology-oriented: the Bletchley Park codebreakers of the Second World War and the theft of American nuclear secrets by Soviet spies are just two examples. Indeed, no one can deny that the emergence of cables, radio, telegraph, telephones and computers made the spying profession more intellectually challenging and therefore a tad more respectable. “Surveillance has replaced terrorism as the ‘threat du jour’,” asserts Jeffreys-Jones.

Having recently attended a major international conference on electronic warfare (‘Old Crows Fly to Stockholm’, E&T September 2015), I can testify that modern military intelligence is largely conducted in the electromagnetic spectrum and boils down to such dull-sounding activities as monitoring, jamming and intercepting signals. The bulk of present-day spying is therefore carried out by bespectacled geeks in white lab coats rather than the hooded cloak-and-dagger knights and muscular Bond-like super-hunks of yesteryear.

Of course, it was not always just guns and poisonous umbrella tips. We all know about Alan Turing, who achieved one of history’s major espionage breakthroughs without being a professional spy himself. From ‘The Secret War: Spies, Codes and Guerrillas 1939-1945’ (William Collins, £30) by the distinguished military historian Max Hastings, I also learned about Bill Tutte, the 24-year-old Cambridge mathematician who broke the Nazis’ top-secret Lorenz teleprinter ciphers – allegedly even more sophisticated than Enigma. “Tutte deserves to be almost as celebrated as Turing,” concludes Hastings in this comprehensive history of Second World War espionage.

Unlike the intelligence of 50-70 years ago, which rested on personalities like Richard Sorge, Joe Rochefort and RV Jones, modern espionage is largely a collective effort. The transfer of intelligence operations to cyberspace, however, has failed to alter the negative stereotype. In ‘Cyberphobia: Identity, Trust, Security and the Internet’ (Bloomsbury, £20), Edward Lucas of The Economist writes about his own ‘cyberphobia’: the fear of an impending ‘digital Pearl Harbour’ – a massive cyber attack by one nation against another that would bring down the victim’s entire infrastructure.

This is where modern-day, technocratic James Bonds can demonstrate their new electronic prowess. As BBC security correspondent Gordon Corera argues in his book ‘Intercept: The Secret History of Computers and Spies’ (Weidenfeld & Nicolson, £20), the biggest intelligence challenges of the near future will be the so-called crypto wars – “privacy versus security, anonymity versus identifiability and the place of encryption.” The recent massive hack of the Ashley Madison dating site makes this scenario frighteningly believable. In Corera’s opinion, all computers – starting with the very first one to arrive at Britain’s War Office in 1929, allegedly to help calculate the trajectories of artillery projectiles – were meant to be tools of espionage: “Computers were built to help people spy. Then they became the targets of spying. Soon they may be able to spy all by themselves.”

As we know, they already do, so I can be forgiven for spending my days in anticipation of a new New Cold War thriller – ‘The Spy Who Came in from the Cloud’ or something of that sort. When it arrives, I shall review it right here.

Vitali Vitaliev

MIT Press

The Technological Singularity

By Murray Shanahan, £10.95, ISBN: 9780262527804

With the current slew of doomsday predictions about super-intelligent machines, ‘The Technological Singularity’ provides a timely crash course for the uninitiated. The red eye of HAL, 2001’s malevolent supercomputer, on the front cover is a subtle nod to the hysteria surrounding recent advances in the field, matched by an equally sensationalist dose of utopian futurology from the cheerleaders of artificial intelligence (AI).

Part of MIT Press’s ‘Essential Knowledge’ series, however, the book aims not to make predictions or judgements, but to bring those just coming to the topic up to speed. It takes its name from the concept that humans are heading towards a point in time where machines will become so intelligent that our ability to comprehend their actions will break down, and with it our ability to predict events, in much the same way as the laws of physics break down at the centre of a black hole.

While some reserve a special quality for human-level intelligence, the consensus is that, whether due to an AI arms race or simply the inexorable march of progress, this event is inevitable. How long it will take and how we will get there are up for debate, though, and the path we choose is likely to have major implications for the technology’s impact on our world. Shanahan gives a solid overview of the two main routes – digital emulation of a biologically realistic brain, or engineering one from scratch using machine learning – and of why self-improving AI could mean a rapid transition from human-level capabilities to superintelligence. While brain-inspired AI might retain traits that give us some window into its operation, an intelligence designed from first principles could be quite alien.

The book highlights the dangers of anthropomorphising AI and identifies the three key questions that will govern its actions: what is its reward function, how does it learn, and how does it maximise its expected reward? From this conceptual base Shanahan goes on to explore some of the key moral and existential issues the development of AI will pose. Is an emulated brain conscious? Can you confer ‘personhood’, with its rights and obligations, on an entity that can be replicated, split and merged? How can we engineer an AI that retains human values? Would a superintelligence have more rights than us?

The book divides the effect of AI into two distinct waves of disruption: the introduction of specialised AIs that will replace all but the most important jobs and potentially infantilise humanity by taking over all aspects of our decision-making; and the advent of superintelligence. The various ways we might try to shape and control these effects are discussed at length, but the book also highlights how difficult that could be by showing how even the soundest strategies are likely to collapse when faced with the unpredictability of AI. Shanahan makes it clear these events could happen within our lifetime, meaning the topic needs to become part of the public discourse now.

The brevity of the book means some important topics, such as the theory of consciousness, are given only superficial treatment, and as Shanahan concedes in his preface, despite his best attempts at neutrality his own opinions do creep in at points. His preference for biologically inspired embodied AI betrays his background as a cognitive roboticist, while his closing appeal to consider the possibility that creating AI is part of humanity’s cosmological destiny hints at his enthusiasm for the technology.

For those already well-read on the topic, much of what is discussed will be familiar, but Shanahan’s presentation is succinct, comprehensive and commendably accessible for such a complex subject.

Edd Gent


The Rise of the Robots: Technology and the Threat of Mass Unemployment

By Martin Ford, £18.99, ISBN: 9781780747491

“I’ve been saying for years that employees are our most valuable asset. It turns out that I was wrong. Money is our most valuable asset. Employees are ninth.” It’s more than 20 years since the Dilbert cartoon appeared in which the long-suffering engineer’s pointy-haired boss spells out to his team the harsh reality of where their skills lie in terms of monetary value. And that they rank after the company’s stocks of carbon paper.

Today, money still rules the corporate roost, but a different kind of asset has joined the fray, one that could make humans obsolete in white-collar professions whose practitioners probably assumed their jobs were safe from technology. The threat from automation analysed by Martin Ford isn’t a carefully coordinated, Terminator-style Armageddon in which robots take over by force. The message here is that the danger comes not from ever more sophisticated machines as such, but from flexible artificial intelligence that makes them adaptable to more and more specialist disciplines previously the preserve of humans.

The coming years will see the middle class, as Ford puts it, “hollowed out” as workers are replaced by machines that work 365 days a year without taking holidays or ever getting sick. Ford describes it as the most important technological shift since machines began to displace manual workers in the industrial revolution. For him, the danger is not laser-toting androids but the risk to the balance of the economy when the consumers who buy goods and services no longer earn a living by making them. The resulting imbalance, he warns, could bring about massive unemployment and inequality that – coupled with climate change – could lead to the implosion of the global economy.

The good news is that Ford’s prescription for an economy in which humans and robots can exist successfully side by side consists largely of things most people would agree we should be doing anyway: raising educational standards – particularly in areas where human expertise is hard to replicate – and investing in infrastructure. If you’re sceptical, and feeling in a particularly masochistic mood, the BBC website has a page that uses data from Oxford University, the Office for National Statistics and Deloitte to estimate how at risk your job is. Put in your job title and it’ll give you the odds that within the next two decades you’ll be replaced by a robot.

If Martin Ford’s warnings prove correct, you might well not want to know.

Dominic Lenton
