
Experts call for a pause on ‘out-of-control’ AI race
Elon Musk and Steve Wozniak are among those calling for a six-month pause on the development of artificial intelligence (AI) tools to avoid "profound risks to society and humanity".
In an open letter issued by the Future of Life Institute, more than 1,300 people, including several notable technology experts, have asked AI labs to pause all large-scale AI experiments for at least six months.
The letter stresses that AI labs are currently locked in an “out-of-control race” to develop and deploy machine-learning systems “that no one – not even their creators – can understand, predict, or reliably control.”
Notable signatories include Tesla and Twitter owner Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, Stability AI CEO Emad Mostaque, politician Andrew Yang and AI researchers Yoshua Bengio and Stuart Russell.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. "This confidence must be well justified and increase with the magnitude of a system's potential effects."
Over the last few months, AI-powered chatbots such as ChatGPT have seen a drastic rise in popularity. These free tools can generate text in response to a prompt, including articles, essays, jokes and even poetry. However, governments and experts have raised concerns about the risks these tools could pose to people’s privacy, human rights or safety.
These worries have led Microsoft-backed OpenAI – the company that created ChatGPT – to suggest the possibility of requiring AI tools to go through an independent review "before starting to train future systems" and "for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models".
"We agree," the open letter stated in response. "That point is now."
The signatories called for an immediate pause on the training of AI systems more powerful than GPT-4. They argued the pause should be "public and verifiable, and include all key actors".
If such a pause cannot be enacted quickly, governments should step in and institute a moratorium, the letter stated, to allow AI research and development to refocus on "making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal".
During the pause, the letter called for AI developers to work with policymakers to "dramatically accelerate" the development of robust AI governance systems, including creating new and capable regulatory authorities dedicated to AI, overseeing and watermarking highly capable AI systems, and establishing liability for AI-caused harm.
"Humanity can enjoy a flourishing future with AI," the letter read. "Having succeeded in creating powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt."
Earlier this month, OpenAI unveiled GPT-4, the fourth iteration of the model behind ChatGPT, which can engage users in human-like conversation, compose songs and summarise lengthy documents.
Seeing the chatbot's success, many companies have jumped at the chance to develop their own chatbots or incorporate existing ones into their products. Last month, Microsoft launched a ChatGPT-powered version of its search engine Bing. Shortly after, Google launched a rival chatbot, named Bard.
The rise in popularity of these technologies has prompted countries such as the UK to begin designing 'light-touch' regulatory frameworks governing the safe use of AI.
The Future of Life Institute – which issued the letter – is a non-profit funded primarily by the Musk Foundation, as well as the London-based group Founders Pledge and Silicon Valley Community Foundation, according to the European Union's transparency register.
"The letter isn't perfect, but the spirit is right: we need to slow down until we better understand the ramifications," said Gary Marcus, a professor at New York University. "The big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialise."
The full list of signatories is published alongside the open letter, although reports warned that some names had been added as a joke, including that of Sam Altman, CEO of OpenAI.