UN Security Council in New York, 2022

UN officials call for AI regulation during Security Council meeting


The United Nations (UN) has held its first Security Council meeting on the risks of artificial intelligence (AI), as 1,300 experts signed an open letter vouching for the technology

The meeting of the Security Council saw representatives of the 15 member countries listen to AI experts discussing whether AI was a ‘catastrophic risk for humans’ or a ‘historic opportunity’. 

The meeting was chaired by the UK's foreign secretary, James Cleverly, who said AI “knows no borders” and stressed the urgent “need to shape the global governance of transformative technologies”. 

"No country will be untouched by AI, so we must involve and engage the widest coalition of international actors from all sectors," Cleverly said. "Our shared goal will be to consider the risks of AI and decide how they can be reduced through coordinated action."

During its session, the chamber heard from Jack Clark, co-founder of leading AI company Anthropic, and professor Zeng Yi, co-director of the China-UK Research Centre for AI Ethics and Governance. Both stressed AI’s potential while voicing support for regulatory efforts. 

"We cannot leave the development of artificial intelligence solely to private sector actors," Clark said. "Governments of the world must come together, develop state capacity, and make further development of powerful AI systems."

He warned of the risks of not understanding the technology, saying it would be akin to "building engines without understanding the science of combustion".

Meanwhile, Zeng argued that the UN “must play a central role to set up a framework on AI for development and governance to ensure global peace and security", adding that “AI risks human extinction simply because we haven't found a way to protect ourselves from AI's utilisation of human weaknesses.”

Government officials also took the floor to defend AI or warn against it. While China stressed that AI should not become a “runaway horse”, the United States warned against its use to censor or repress people.

"Whether it is good or bad, good or evil, depends on how mankind utilises it, regulates it and how we balance scientific development with security," said China's UN ambassador Zhang Jun. 

To address these risks, secretary general António Guterres called for the establishment of a global watchdog, modelled after the International Atomic Energy Agency and the Intergovernmental Panel on Climate Change, that would “support collective efforts to govern this extraordinary technology”. 

Russia, however, questioned whether the Council should be discussing AI in the first place, and doubted the need for governmental involvement in the technology’s development. 

"What is necessary is a professional, scientific, expertise-based discussion that can take several years and this discussion is already underway at specialised platforms," said Russia's Deputy UN Ambassador Dmitry Polyanskiy.

The meeting took place on the same day that BCS, the Chartered Institute for IT, published an open letter aiming to counter “AI doom”. In the document, 1,300 AI experts highlight the technology’s potential as a “force for good”.

“Frankly, this notion that AI is an existential threat to humanity is too far-fetched,” said Rashik Parmar, BCS chief executive and one of the letter’s signatories. “We're just not in any kind of a position where that's even feasible.”

During a visit to the US last month, UK prime minister Rishi Sunak claimed the UK was the “natural place” to lead the conversation on AI and announced that Britain will host the first major global summit on AI safety this autumn. 

The country has already taken steps towards developing ‘light touch’ regulatory frameworks for the safe use of AI. These include the creation of a £100m Foundation Model Taskforce, modelled after the Covid-19 Vaccine Taskforce, which will focus on the research and development of “safe and reliable” foundation models, the type of AI that underpins chatbots such as ChatGPT.

The AI sector contributes £3.7bn to the UK economy and employs 50,000 people across the country, according to official figures. 

