
UK intelligence could use AI to fend off AI-enabled threats

Image: GCHQ headquarters

A Royal United Services Institute (Rusi) study has concluded that there may be a “pressing” need to use AI tools to counter AI-enabled national security threats, such as polymorphic malware and synthetic media.

The study [available online as a PDF] was commissioned by GCHQ, with the aim of informing future policy on national security uses of AI. Its findings were largely based on consultation with law enforcement, private companies, academic and legal experts, and community representatives.

The authors concluded that intelligence agencies could deploy AI for cybersecurity purposes (such as proactively identifying abnormal network traffic or malicious software and responding in real time) and to support humans in intelligence analysis (such as through natural language processing and image recognition).
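
As a rough sketch of what the first of these uses might look like in practice, the example below trains an Isolation Forest (scikit-learn) on synthetic “normal” traffic and flags statistical outliers for a human to review. The features, figures and contamination rate are illustrative assumptions, not details taken from the Rusi study.

```python
# Minimal anomaly-detection sketch: flag unusual network flows with an
# Isolation Forest. The features (bytes transferred, connection duration)
# and all figures are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" traffic: roughly 500 bytes per flow, ~2-second connections.
normal_flows = rng.normal(loc=[500.0, 2.0], scale=[100.0, 0.5], size=(1000, 2))

# A few extreme flows standing in for possible data-exfiltration attempts.
suspect_flows = np.array([[50_000.0, 0.1], [45_000.0, 0.2], [60_000.0, 0.05]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# predict() returns -1 for anomalies and 1 for inliers.
for flow, label in zip(suspect_flows, model.predict(suspect_flows)):
    if label == -1:
        print(f"flag for human review: bytes={flow[0]:.0f}, duration={flow[1]:.2f}s")
```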

While they noted that none of the AI tools they considered in their research could replace human judgement, they wrote that “augmented intelligence” systems could improve efficiency. These systems could be used to collect and analyse information from a wide range of sources, flagging up points of interest for humans to investigate.
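
A minimal sketch of that kind of triage is shown below, using TF-IDF similarity from scikit-learn to rank documents from different sources against an analyst-defined topic. The documents, topic and flagging threshold are invented for illustration; crucially, the system only ranks and flags, and the judgement on anything flagged stays with a human analyst.

```python
# Minimal "augmented intelligence" triage sketch: rank documents against an
# analyst-defined topic and surface likely matches for human investigation.
# All documents and the topic are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Routine press release about a trade exhibition in Birmingham.",
    "Forum post offering bulk sales of compromised email credentials.",
    "Blog entry reviewing a new smartphone camera.",
]
topic_of_interest = "stolen account credentials traded online"

vectoriser = TfidfVectorizer(stop_words="english")
vectors = vectoriser.fit_transform(documents + [topic_of_interest])

# Cosine similarity between each document and the topic; higher means closer.
scores = cosine_similarity(vectors[:-1], vectors[-1]).ravel()

for doc, score in sorted(zip(documents, scores), key=lambda pair: -pair[1]):
    marker = "FLAG" if score > 0.05 else "    "  # threshold is arbitrary here
    print(f"{marker} {score:.2f}  {doc}")
```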

The use of these AI and augmented intelligence tools will inevitably raise many questions for the UK intelligence community about privacy, machine bias and human rights, the report said.

“Despite a proliferation of ethical principles for AI, it remains uncertain how these should be operationalised in practice, suggesting the need for additional sector-specific guidance for national security uses of AI,” it said. This approach should be agile, allowing the intelligence community to adapt to the rapidly evolving technological and threat landscape.

The Rusi researchers also concluded that the need to incorporate AI into intelligence work is perhaps most pressing when it comes to detecting and countering AI-enabled national security threats.

“Malicious actors will undoubtedly seek to use AI to attack the UK and it is likely that the most capable hostile state actors, which are not bound by an equivalent legal framework, are developing or have developed offensive AI-enabled capabilities,” they wrote.

“In time, other threat actors, including cybercriminal groups, will also be able to take advantage of these same AI innovations. The national security requirement for AI is therefore all the more pressing when considering the need to combat potential future uses of AI by adversaries.”

Such adversarial uses of AI could threaten the UK’s digital, political and physical security, the Rusi report said. They may include polymorphic malware (malware that constantly changes its characteristics to evade detection), the automation of social engineering attacks, and the generation of synthetic media such as deepfake videos for the purpose of manipulating public opinion.
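
To see why polymorphic malware defeats traditional signature matching, consider the toy demonstration below (the byte string is an inert placeholder, not real malware): mutating a single byte between “generations” produces a completely different cryptographic hash, so a fixed hash-based signature no longer matches, which is one reason detection increasingly relies on behavioural and statistical methods of the kind sketched earlier.

```python
# Toy illustration of polymorphism versus hash-based signatures: a one-byte
# mutation changes the file's hash entirely, so a fixed signature stops
# matching. The payload bytes are an inert placeholder, not real malware.
import hashlib

payload_v1 = b"placeholder-payload-bytes-v1"
payload_v2 = payload_v1[:-1] + b"2"  # single-byte mutation between generations

sig_v1 = hashlib.sha256(payload_v1).hexdigest()
sig_v2 = hashlib.sha256(payload_v2).hexdigest()

print("v1:", sig_v1)
print("v2:", sig_v2)
print("signature still matches:", sig_v1 == sig_v2)  # prints False
```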

AI tools could also be used to threaten physical security as the continued expansion of the IoT opens up more opportunities for large-scale disruption and damage, including potential attacks on critical national infrastructure.
