AI is a threat to global stability, warns Cambridge University report
Artificial intelligence (AI) could be used by rogue states to cause havoc and disruption, according to a new report from Cambridge University’s Centre for the Study of Existential Risk.
In a report titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, the university body warns that malicious manipulation of AI could create a destabilising effect and calls on governments and corporations worldwide to ensure that this does not happen.
It also warns of the rise of “highly believable fake videos” impersonating prominent figures or faking events to manipulate public opinion around political events.
The 100-page report identifies three security domains (digital, physical and political security) as particularly relevant to the malicious use of AI. It suggests that AI will alter the trade-off between the scale and efficiency of attacks, enabling attacks that are simultaneously large-scale, finely targeted and highly efficient.
The authors expect novel cyber-attacks, such as automated hacking, speech synthesis used to impersonate targets, finely targeted spam emails built from information scraped from social media, and the exploitation of vulnerabilities in AI systems themselves (e.g. through adversarial examples and data poisoning).
Likewise, the proliferation of drones and cyber-physical systems will allow attackers to deploy or repurpose such systems for harmful ends, such as crashing fleets of autonomous vehicles, turning commercial drones into face-targeting missiles or holding critical infrastructure to ransom.
It also warns about the rise of autonomous weapons systems on the battlefield, which risks the loss of human control and presents “tempting targets for attack”.
It suggests that academics and others should rein in what they publish or disclose about new developments in AI until other experts in the field have a chance to study and react to potential dangers they might pose.
Report co-author Dr Seán Ó hÉigeartaigh said: “Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to 10 years.
“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems, because the risks are real.
“There are choices that we need to make now and our report is a call-to-action for governments, institutions and individuals across the globe.
“For many decades hype outstripped fact in terms of AI and machine learning. No longer.
“This report looks at the practices that just don’t work anymore and suggests broad approaches that might help. For example, how to design software and hardware to make it less hackable and what type of laws and international regulations might work in tandem with this.”
The report urges policy makers and researchers to work together to understand and prepare for how the technology could be used maliciously and calls for developers to be proactive and mindful of how it could be misused.
Those who contributed to the study include the non-profit research lab OpenAI, co-founded by Elon Musk, and the international digital rights group the Electronic Frontier Foundation.
Several prominent technology figures, including Facebook boss Mark Zuckerberg, have previously spoken out in favour of artificial intelligence and the benefits it could bring.