AI hackers could overcome even the hardiest of cyber security regimes

IBM researchers are teaching artificial intelligence to compromise even the most locked-down cyber security setups, and warn that the technology allows for an unprecedented level of undetected infiltration.

Machine-learning techniques are used to build hacking programs that can slip past even the most rigorous cyber defences and lie in wait, undetected, until they reach a very specific target.

The research project is called DeepLocker and is being developed by IBM as a proof of concept, but the team behind it has warned that “evil” people are probably already working on using such systems in real attacks.

They also said that DeepLocker could be used to keep ransomware and other malware hidden from traditional security tools.

No one has yet boasted of catching any malicious software that clearly relied on machine learning or other variants of artificial intelligence, but that may just be because the attack programs are too good to be caught.

Researchers say that, at best, it’s only a matter of time. Free artificial intelligence building blocks for training programs are readily available from Alphabet Inc’s Google and others, and the ideas work all too well in practice.

“I absolutely do believe we’re going there,” said Jon DiMaggio, a senior threat analyst at cybersecurity firm Symantec Corp. “It’s going to make it a lot harder to detect.”

The malware could apparently be contained within legitimate applications and be almost impossible to detect until it is awakened and deployed when certain criteria are met. Such criteria could include a particular user logging into a machine or the software running on a specific device.

The AI malware could be embedded within a video-conferencing application, for example, and only activated when a specific trigger, such as the intended target's face, is recognised in the video feed.

IBM said that it has not seen technology such as this being used nefariously in the wild so far and that its project is simply designed to warn companies of this potential oncoming threat.

Currently, state-of-the-art defences generally rely on examining what the attack software is doing, rather than the more commonplace technique of analysing software code for danger signs.

“We have a lot of reason to believe this is the next big thing,” said lead IBM researcher Marc Ph. Stoecklin. “This may have happened already, and we will see it two or three years from now.”

At a recent New York conference, Hackers on Planet Earth, defence researcher Kevin Hodges showed off an “entry-level” automated program he made with open-source training tools that tried multiple attack approaches in succession.

“We need to start looking at this stuff now,” said Hodges. “Whoever you personally consider evil is already working on this.”
