
AI advances could put human lives at risk, Prime Minister’s adviser warns


Matt Clifford has warned of the potential for artificial intelligence (AI) tools to create cyber and biological weapons capable of causing many deaths.

Clifford, a member of the UK's AI taskforce, has stressed the need for regulations that prevent AI tools from becoming “very powerful” systems that humans could struggle to control.

Prime Minister Rishi Sunak's adviser commented on the technology's rapid development in an interview with TalkTV, where he stated that even the short-term risks that come with AI were “pretty scary”. 

“You can use AI today to create new recipes for bio weapons or to launch large-scale cyber attacks," he said. "These are bad things."

Since the interview was published, Clifford has stated that headlines based on it – which quoted him saying that AI could kill many humans within two years – do not reflect his views.

During the TalkTV interview, Clifford made a reference to an open letter signed by notable tech figures such as Elon Musk and Steve Wozniak, which warned that AI labs were locked in an “out-of-control race” and called for a six-month pause on all large-scale AI experiments. 

The letter was followed by the resignation of AI ‘godfather’ Geoffrey Hinton from his job at Google; he warned that “bad actors” would use the new technologies to harm others and that the tools he helped to create could spell the end of humanity.

“The kind of existential risk that I think the letter writers were talking about is… about what happens once we effectively create a new species, an intelligence that is greater than humans,” he said.

Clifford was also asked on the 'First Edition' programme on Monday what percentage chance he would give that humanity could be wiped out by AI. He replied: “I think it is not zero.”

“If we go back to things like the bio weapons or cyber [attacks], you can have really very dangerous threats to humans that could kill many humans – not all humans – simply from where we would expect models to be in two years’ time," he added.

“I think the thing to focus on now is how do we make sure that we know how to control these models because right now we don’t.”

Clifford acknowledged that the prediction of computers surpassing human intelligence within two years was at the "bullish end of the spectrum", but said that AI systems are improving very rapidly.

In order to prevent the technology from causing harm, Clifford said AI development needed to be regulated on a global scale, not only by national governments.

Clifford further stressed that, if used correctly, AI technologies could have a very positive impact on society and become a force for good.

“You can imagine AI curing diseases, making the economy more productive, helping us get to a carbon-neutral economy,” he said.

Shadow digital secretary Lucy Powell told the Guardian that AI should be licensed in a similar way to medicines or nuclear power.

“That is the kind of model we should be thinking about, where you have to have a licence in order to build these models,” she said.

Paul Scully, minister for tech and digital economy, told the TechUK Tech Policy Leadership Conference on Tuesday (6 June) that there should not just be a focus on a “Terminator-style scenario”.

“If we get it wrong, there is a dystopian point of view that we can follow here," he told attendees. "There’s also a utopian point of view. Both can be possible.

“If you’re only talking about the end of humanity because of some rogue, Terminator-style scenario, you’re going to miss out on all of the good that AI is already functioning – how it’s mapping proteins to help us with medical research, how it’s helping us with climate change.

“We have to take breathing space to make sure we’re getting this right for the whole of society, as well as the benefit of the sector.”

Meanwhile, the Prime Minister’s official spokesman said the UK could become a global leader in artificial intelligence, both in the technology itself and in the regulatory systems around it.

“We are not complacent about the potential risks of AI," he stressed. "Equally, it does present significant opportunities for the people of the UK. The UK is looking to lead the way in this space. You cannot look to proceed with AI without having the right guardrails in place.”

With this goal in mind, the UK has begun designing ‘light touch’ regulatory frameworks regarding the safe use of AI.

Clifford is a member of the new £100m Foundation Model Taskforce, modelled after the Covid-19 Vaccine Taskforce, which will focus on the research and development of “safe and reliable” foundation models, the type of AI underpinning chatbots such as ChatGPT. He is also chairman of the Advanced Research and Invention Agency (Aria).
