
OpenAI CEO calls for AI regulation
Sam Altman, the creator of the advanced chatbot ChatGPT, has called on US lawmakers to require licences for companies that want to develop "increasingly powerful" artificial intelligence (AI) tools.
The head of OpenAI has testified before a US Senate committee about the possibilities and dangers of the new technology that powers ChatGPT.
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” Altman said.
The tech executive called for stricter restrictions on artificial intelligence tools, including the creation of a US or global agency that would issue licences to companies seeking to develop AI tools, and revoke them should those companies refuse to comply with safety standards.
"I think if this technology goes wrong, it can go quite wrong...we want to be vocal about that," Altman said. "We want to work with the government to prevent that from happening."
During the hearing, Altman also expressed his concern that these new technologies could be leveraged to interfere with election processes, saying the possibility of AI fuelling misinformation makes him "nervous".
The 38-year-old tech founder has become a sort of spokesman for the AI sector, and has been very vocal about the ethical questions that come with the development of this technology.
One of these issues concerns the impact that AI could have on the economy, including the likelihood that the technology could replace some jobs and lead to lay-offs in certain sectors. Altman said he is "very optimistic" about the new jobs the technology will create, but did not shy away from acknowledging the changes AI would bring to the job market.
"There will be an impact on jobs. We try to be very clear about that," he said, adding that the government will "need to figure out how we want to mitigate that".
The executive was speaking before the Senate, where the government had convened top technology CEOs, including Altman, to discuss how to better harness the benefits of AI tools while limiting their misuse.
The meeting was prompted by the Biden administration’s concerns over the rapid development of generative AI. Over the last few months, AI-powered chatbots such as OpenAI’s ChatGPT have seen a dramatic rise in popularity. These free tools can generate text in response to a prompt, including articles, essays, jokes and even poetry. A study published in January showed ChatGPT was able to pass a law exam, scoring an overall grade of C+.
The hearing showed there was majority support within both parties for AI regulation, but there was also debate regarding what that regulation would look like.
"AI is no longer fantasy or science fiction. It is real, and its consequences for both good and evil are very clear and present," said Senator Richard Blumenthal, the subcommittee's chair. "We need to maximize the good over the bad. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment".
Republican Senator Josh Hawley said the technology could be revolutionary, but also compared the new tech to the invention of the "atomic bomb". Meanwhile, Christina Montgomery, chief privacy and trust officer at IBM, urged Congress to focus regulation on areas with the potential to do the greatest societal harm.
Earlier this month, US Vice President Kamala Harris summoned the CEOs of US AI giants and told them they have a “moral” responsibility to protect society from the potential dangers of AI, and made it clear that the government is considering drafting legislation that would further regulate these technologies.
Neil Murphy, chief sales officer at the intelligent automation company ABBYY, has encouraged businesses to anticipate these types of regulations, both in the US and the UK.
“The progression of the EU AI Act is necessary for the benefits of AI to be realised in an ethical and sustainable way," he said. “However, at the rate AI is developing, organisations should continue assessing the risks before deploying new technologies regardless of current regulation, as they could impact critical processes across the organisation.
"We shouldn’t wait for it to pass before considering all the ethical, legal, and business repercussions.”
Jeremy Rafuse, vice president and head of digital workplace at GoTo, added: "The topic of AI is dominating the news agenda. But we’re missing something here when questioning AI: the human touch.
"It is only alongside human expertise that AI and advanced machine learning can run effectively (...) Combining the strengths of AI and human support will not only increase the productivity of IT support but create a ripple effect to improve the efficiency of all organisational operations. In turn, this improves the experience for employees and customers alike, and highlights the critical role IT plays in organisations of all sizes."
Last month, notable technology figures including Elon Musk and Steve Wozniak signed an open letter warning that AI labs were locked in an “out-of-control race” and calling for a six-month pause on all large-scale AI experiments.
The letter was followed by the resignation of AI ‘godfather’ Geoffrey Hinton from his job at Google, warning that “bad actors” will use the new technologies to harm others and that the tools he helped to create could spell the end of humanity.