
Beyond Asimov’s three laws: the case for an International AI Agency
Effectively regulating artificial intelligence on a global scale will require countries to agree on a framework they can all stick to. Efforts to police research in nuclear technology could provide a good starting point.
Earlier this year, the European Union proposed a draft regulation to protect the fundamental rights of its citizens from certain applications of artificial intelligence. In the USA last month, the Biden administration launched a taskforce seeking to spur AI innovation. On the same day, China adopted its new Data Security Law, asserting a stronger role for the state in controlling the data that fuels AI.
These three approaches – rights, markets, sovereignty – highlight the competing priorities as governments grapple with how to reap the benefits of AI while minimising harm.
A cornucopia of proposals has emerged to fill the policy void. For the most part, however, the underlying problem is misconceived as being either too hard or too easy. Too hard, in that great effort has gone into generating ethical principles and frameworks that are unnecessary or irrelevant, since most essentially argue that AI should obey the law or be ‘good’. Too easy, in that it is assumed that existing structures can apply those rules to entities that operate with speed, autonomy, and opacity.
Personally, I blame Isaac Asimov and his frequently quoted three laws of robotics. They’re good science fiction, but if they had actually worked then his literary career would have been brief.
The future of regulating AI will rely on laws developed by states and standards developed by industry. Unless there is some global coordination, however, the benefits of AI – and its risks – will not be equitably or effectively distributed.
Useful lessons can be taken here from another technology that was at the cutting edge when Asimov started publishing his robot stories – nuclear energy.
First, it is a technology with enormous potential for good and ill that has, for the most part, been used positively. Observers from the dark days of the Cold War would have been pleasantly surprised to learn that nuclear weapons were not used in conflict after 1945 and that only a handful of states possess them the better part of a century later.
The international regime that helped ensure this offers us a possible model for the future global regulation of AI. The grand bargain at the heart of President Eisenhower’s 1953 ‘Atoms for Peace’ speech and the creation of the International Atomic Energy Agency (IAEA) was that the beneficial uses of nuclear energy could be shared, with a mechanism in place to ensure that it was not weaponised.
The equivalent weaponisation of AI – either narrowly, through the development of autonomous weapon systems, or broadly, in the form of a general AI or superintelligence that might threaten humanity – is today beyond the capacity of most states. For weapon systems at least, that technical gap will not last long.
Another reason for the comparison is that, as with nuclear energy, it is the scientists deeply involved in AI research who have been the most vocal in calling for international regulation. The various guides, frameworks and principles that have been proposed were largely driven by scientists, with states tending to follow rather than lead.
As the nuclear non-proliferation regime shows, however, good norms are necessary but not sufficient for effective regulation.
Of course, the limitations of an analogy between nuclear energy and AI are obvious. Nuclear energy involves a well-defined set of processes related to specific materials that are unevenly distributed; AI is an amorphous term and its applications are extremely wide. The IAEA’s grand bargain focused on weapons that are expensive to build and difficult to hide; weaponisation of AI promises to be neither.
Nonetheless, some kind of mechanism at the global level is essential if regulation of AI is going to be effective.
Industry standards will be important for managing risk and states will be a vital part of enforcement. In an interconnected world, however, regulation premised on the sovereignty of territorially bound states is not fit for purpose. A hypothetical International Artificial Intelligence Agency would be one way of addressing that structural problem.
The biggest difference between attempts to control nuclear power in the 1950s and AI today is that when Eisenhower addressed the United Nations, the effects of the nuclear blasts on Hiroshima and Nagasaki were still being felt. The Soviet Union had tested its own devices, and the knowledge seemed likely to spread to others – perhaps all others.
There is no such threat from AI at present and certainly no comparably visceral evidence of its destructive power. Absent that threat, getting agreement on meaningful regulation of AI at the global level will be difficult.
Even if it proves difficult to create a global institution in time to prevent the first true AI emergency, without one it may be impossible to prevent the second.
Simon Chesterman is dean of the National University of Singapore Faculty of Law, and senior director of AI governance at AI Singapore. His latest book, ‘We, the Robots? Regulating Artificial Intelligence and the Limits of the Law’, is published by Cambridge University Press.