Nuclear stability could be threatened by AI, report suggests
A report by the RAND Corporation has found that advances in artificial intelligence (AI) could shake up the conditions which ensure nuclear stability by 2040.
The threat arises not from the possibility of autonomous nuclear weapons themselves – particularly given a likely UN ban on lethal autonomous conventional weapons – but from AI's potential to erode nuclear stability, which could encourage human decision-makers to take risks in order to stay ahead of their rivals.
“The connection between nuclear war and [AI] is not new, in fact the two have an intertwined history,” said Edward Geist, a researcher at the RAND Corporation and an author of the report. “Much of the early development of AI was done in support of military efforts or with military objectives in mind.”
During the Cold War and subsequent periods of international tension, the threat of mutually assured destruction (MAD) – the complete devastation that would follow escalation to all-out nuclear war – has deterred the use of these weapons, encouraging a degree of stability.
However, the RAND report suggests that over the next few decades, AI-equipped sensor technologies could threaten this stability by wearing away the well-established conditions of MAD.
For instance, a state could use autonomous tools – as well as drones, satellites and other sensors – to identify and destroy retaliatory forces, such as nuclear submarines, before launching nuclear weapons of its own. States may pursue these AI capabilities in order to get ahead of their rivals, even if they do not have plans to carry out an attack.
According to the report – which was based on contributions from experts in nuclear issues, national security, government and AI – this could seriously undermine stability and raise stakes; even if a government possessing these AI technologies does not plan to use them, other governments cannot be certain of their intentions.
However, the report also suggests that AI could enhance stability by refining intelligence collection and analysis, reducing the risk of miscalculation or misinterpretation of data and subsequent aggressive action. The RAND Corporation researchers suggest that AI-equipped systems could eventually prove less error-prone than humans.
“Some experts fear that an increased reliance on [AI] can lead to new types of catastrophic mistakes,” said Andrew Lohn, an author of the paper and a RAND engineer.
“There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult, and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk.”
Last week, researchers at the Massachusetts Institute of Technology proposed a secure method for verifying nuclear weapons without revealing design details, saying the approach could be used in nuclear disarmament efforts.