‘Conscious’ AI no longer a far-fetched possibility, experts warn
Academics from around the world have signed an open letter calling for further research into the science of consciousness in light of the rapid development of artificial intelligence (AI).
The letter was compiled by the Association for Mathematical Consciousness Science (AMCS), which described it as "a wake-up call for the tech sector, the scientific community and society in general".
In the document, dozens of academics from respected institutions warn that the computing power and capabilities of AI systems are advancing faster than scientists' understanding of how those systems work and of their "alignment" with human values.
"It is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness," the letter states.
It further stresses that contemporary AI systems already display human traits recognised in psychology, including evidence of 'Theory of Mind'. The statement echoes claims made in June last year by Google engineer Blake Lemoine, who said the company's LaMDA AI could express thoughts and feelings equivalent to those of a human child. Google placed Lemoine on compulsory leave following his claims.
To address the ethical, legal and political dilemmas that sentient AI would raise, the open letter calls for accelerated research into consciousness science, with the goal of ensuring that society understands the implications of achieving artificial general intelligence (AGI).
"AI systems have already been observed to exhibit unanticipated emergent properties," the letter reads. "These capabilities will change what AI can do and what society can do to control, align and use such systems.
"As AI develops, it is vital for the wider public, societal institutions and governing bodies to know whether and how AI systems can become conscious, to understand the implications thereof, and to effectively address the ethical, safety, and societal ramifications associated with artificial general intelligence."
Over the last few months, AI-powered chatbots such as OpenAI’s ChatGPT have seen a dramatic rise in popularity. These free tools can generate text in response to a prompt, including articles, essays, jokes and even poetry. A study published in January showed ChatGPT was able to pass a law exam, scoring an overall grade of C+.
However, governments and experts have raised concerns about the risks these tools could pose to people's privacy, human rights and safety, with OpenAI acknowledging that its tool for detecting AI-generated text can be "very unreliable" on passages under 1,000 characters.
Currently, most experts agree AI is nowhere near the level of sophistication the AMCS letter warns of, but its rapid development has nevertheless alarmed many in the field. Earlier this month, notable technology figures including Elon Musk and Steve Wozniak signed an open letter warning that AI labs were locked in an "out-of-control race" and calling for a six-month pause on the largest AI experiments.
The AMCS letter's signatories include Dr Susan Schneider, who formerly held Nasa's chair in astrobiology at the US Library of Congress, as well as academics from universities in the UK, US and Europe.