Deepfake deception ranked as most concerning AI crime

Deepfake video featuring Jordan Peele and Barack Obama. Image credit: Getty

UCL researchers have identified 20 ways in which AI could be used to facilitate crime over the next 15 years. When ranked in order of concern, AI-synthesised media was judged to pose the greatest potential to cause harm.

AI can be exploited for crime in various ways: as a tool for crime, as a target of crime, or as a context for crime. Computer scientists at UCL in London identified 20 emerging AI-enabled crimes from academic papers, news reports, fiction and popular culture. These crimes include using autonomous cars as weapons, spear-phishing, disrupting AI-controlled systems, AI-synthesised fake news, and harvesting data for the purposes of large-scale blackmail.

These crimes were then ranked over a two-day discussion by a group of 31 experts drawn from academia, defence and law enforcement, on the basis of the harm they could cause, the potential for criminal gain, how easy they would be to carry out and how difficult they would be to stop.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” said senior author Professor Lewis Griffin. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

AI-synthesised audio or video content (often referred to as ‘deepfakes’) was ranked the most concerning, with the experts judging it to be extremely harmful, highly accessible and difficult to defeat. Basic deepfakes are very easy to create with open-source tools, lowering the barrier to entry for criminals without technical expertise.

Although deepfakes are popularly known as tools for satire or disinformation, they are also beginning to be used to impersonate ordinary people over phone or video calls in order to access funds or secure systems. According to the UCL researchers, criminals in Mexico have already used these techniques to obtain funds. This sort of content could also lead to widespread distrust of audio and visual evidence, causing further societal harm.

This week, a second manipulated video depicting US House Speaker Nancy Pelosi as inebriated garnered millions of views on Facebook. While Facebook’s fact-checkers labelled the video as “partly false”, they declined to remove it from the platform.

AI-enabled crimes judged less concerning included the misuse of military robots, the sale of fraudulent 'snake oil' services promoted as AI behind a smokescreen of jargon, autonomous attack drones, AI-assisted stalking, and the use of small autonomous robots to enter homes through cat flaps and commit burglaries.

“People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity,” said first author Dr Matthew Caldwell. “Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”

Professor Shane Johnson, director of UCL’s Dawes Centre for Future Crime, commented: “We live in an ever-changing world which creates new opportunities – good and bad. As such, it is imperative that we anticipate future crime threats, so that policymakers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur. This report is the first in a series that will identify the future crime threats associated with new and emerging technologies and what we might do about them.”
