Global leaders at REAIM conference

Global summit calls for ‘responsible use’ of AI in the military

Image credit: The Netherlands Ministry of Foreign Affairs / Phil Nijhuis

More than 60 countries including the US and China have signed a modest 'call to action' endorsing the responsible use of artificial intelligence (AI) in the military.

The statement, although not legally binding, was a tangible outcome of the first international summit on military AI, co-hosted by the Netherlands and South Korea this week in The Hague.

During the conference, the US called for the responsible use of AI in the military domain and proposed a declaration that would include 'human accountability', Reuters has reported.

"We invite all states to join us in implementing international norms, as it pertains to military development and use of AI" and autonomous weapons, said Bonnie Jenkins, US Under Secretary of State for Arms Control.

The signatories said they were committed to developing and using military AI in accordance with "international legal obligations and in a way that does not undermine international security, stability and accountability."

The US and China signed the declaration alongside more than 60 nations. 

Israel participated in the conference but did not sign the statement. Ukraine was not able to attend and Russia was not invited. 

This is the first international summit of its kind, as many governments have been reluctant to agree to any legal limitations on using AI, for fear that doing so might put them at a disadvantage to rivals.

"We want to emphasise that we are open to engagement with any country that is interested in joining us," Jenkins said.

The proposal said AI weapons systems should involve "appropriate levels of human judgment", in line with updated guidelines on lethal autonomous weapons issued by the US Department of Defense last month.

China's representative, Jian Tan, told the summit that countries should "oppose seeking absolute military advantage and hegemony through AI" and work through the United Nations.

Although the declaration has been celebrated as a meaningful step on the path towards more ethical warfare, human rights advocates noted that it was not legally binding and that it failed to address concerns such as AI-guided drones or the risk that an AI could escalate a military conflict.

Human Rights Watch challenged the US to define what "appropriate levels of human judgment" meant, and not to "tinker with political declarations" but to begin negotiating internationally binding law.

Jessica Dorsey, assistant professor of international law at Utrecht University, said the US proposal was a "missed opportunity" for leadership and the summit statement was too weak.

"It paves the path for states to develop AI for military purposes in any way they see fit as long as they can say it is 'responsible'," she said. "Where is the enforcement mechanism?"

In October last year, the White House unveiled a set of guidelines aimed at encouraging companies to deploy artificial intelligence (AI) technologies more responsibly and protecting consumers from its greatest dangers.

Around the same time, the European Commission proposed an AI Liability Directive that would help people harmed by AI and digital devices such as drones, robots and smart-home systems.

PwC has estimated that AI could contribute up to $15.7tn (£13.2tn) to global economies by 2030, and nations like China, India, Russia, Saudi Arabia and the UK have stepped up to declare their intentions to become global centres for AI innovation.
