The wild west of AI regulation revealed in Westworld remake

The new Westworld paints an artificial intelligence (AI) world stuffed with differing agendas – not unlike its real-world equivalent.

With the $100m Westworld remake reaching the UK this evening (October 4) on Sky Atlantic, a few Zeitgeist points are in order. The series sets out to tell a tricky tale at a time when real-world AI is itself caught up in a complex stage of its evolution.

Last week, five of the most powerful technology companies – Amazon, Facebook, Google, IBM and Microsoft – announced the Partnership on Artificial Intelligence to Benefit People and Society (PAI). It aims to explain AI and its potential benefits to the public, and to develop both standards and best practices for ongoing research. PAI is open, so expect to see other key players such as Apple, Baidu, Intel and Nvidia involved soon enough. In the meantime, everyone remains busy on their own initiatives.

Go back to Baidu, for example. It has just unveiled an open-source AI benchmarking tool, DeepBench, aimed at assessing the performance of deep-learning systems on different hardware platforms. It has also set up a $200m AI venture capital fund.

Regardless of fears about killer robots and Skynet, AI is very much among us, although it has plenty of growing up to do. Smartphone personal assistants and standalone devices like the Amazon Echo are bidding to be seen as useful little helpers. Meanwhile, behind the scenes, AI advances are also used to sift and make sense of 'big data'.

Given all that, another factor you suspect has driven PAI's establishment is the prospect of government regulation. Washington-based website Politico has already reported that the White House plans to lay out "its vision for potential regulation" sometime this autumn, following a consultation on AI launched earlier this year.

PAI will advocate for a light touch, likely echoing the conclusion of a recent study from Stanford University: "New or retooled laws and policies will be needed to address the widespread impacts AI is likely to bring. Rather than 'more' or 'stricter' regulation, policies should be designed to encourage helpful innovation, generate and transfer expertise, and foster broad corporate and civic responsibility for addressing critical societal issues raised by these technologies."

However, will – or indeed can – Washington take a laissez-faire approach? The public has growing concerns over privacy. The transformative aspects of AI could widen the perceived 'digital divide' between different strata of civil society. Yet perhaps the most influential factor will be national security.

We are a long way from Hollywood's nightmare of defence infrastructure under the control of a cranky Skynet or W.O.P.R. Indeed, the problem today is the extent to which AI is a technological space where many of the goals of commerce and the military converge, perhaps too closely for the latter's comfort.

After all, who wants to be best at, say, profiling? The National Security Agency (or, for that matter, GCHQ, the Main Intelligence Directorate (GRU) and so on) or Facebook (or, for that matter, Amazon, Baidu and so on)? The algorithms that will likely determine competitive advantage are going to be very similar.

Similarly, while the public perception of AI research is likely dominated by projects such as Watson at IBM or Google’s UK-based DeepMind subsidiary, much of the research and development (R&D) funding is coming from various branches of the military in various countries. DARPA, the US Department of Defense's advanced R&D agency, has been an especially active funder of Silicon Valley AI start-ups, in return for which silence – or at least the utmost stealthiness – has been required.

Of course, commerce and the military have been through a number of innovation waves where their requirements have coincided. That is why governments have technology transfer requirements. Yet when it comes to AI and within that, deep learning specifically, the challenge has to some extent 'moved up the stack'.

Take semiconductors. Decades ago, innovation was primarily driven by military requirements, but more recently the PC, mobile communications and gaming sectors have led the way. Washington still monitors and controls access to a lot of this technology – particularly with regard to chip manufacturing – but its attitude has largely been governed by the sense that the application sitting on top of the silicon is ultimately the critical component. There was tension, but still scope to let the innovators innovate.

Once you start talking about AI innovation that addresses 'big data', as noted earlier, applications with commercial and military value could even prove to be identical. The resulting tension is self-evident.

Many suspect that, with a little over three months left in office, President Obama will open the AI regulation debate and set the tone, but acknowledge that there simply isn’t enough time left for his administration to resolve the issue. That will fall on his successor's plate.

All this, in its own strange way, brings us back to Westworld. It is not a massive spoiler to say that, after its debut this weekend on HBO in the US, critics noted that the series appears to be pregnant with a wide range of agendas (at least four, by my count). Those are also going to take some time to resolve (several seasons' worth of time, if HBO gets the ratings it wants). In the real world, though, things may need to move a little more quickly.

