
UK competition regulator to review AI market
The UK's competition watchdog has announced it will examine competition rules and consumer protection standards within the artificial intelligence (AI) market.
The Competition and Markets Authority (CMA) has launched an official review into the foundational AI models that underpin popular tools such as ChatGPT.
The investigation aims to "ensure that innovation in AI continues in a way that benefits consumers, businesses and the UK economy", according to the regulator. It is expected to assess the rapid rise of foundational models, and recommend principles that would continue to support this growth in a way that does not harm consumers.
The announcement comes amid growing concerns over the rapid development of generative AI. Over the last few months, AI-powered chatbots such as OpenAI’s ChatGPT have seen a dramatic rise in popularity. These free tools can generate text in response to a prompt, including articles, essays, jokes and even poetry. A study published in January showed ChatGPT was able to pass a law exam, scoring an overall grade of C+.
"AI has burst into the public consciousness over the past few months but has been on our radar for some time," said Sarah Cardell, chief executive of the CMA. "It’s a technology developing at speed and has the potential to transform the way businesses compete as well as drive substantial economic growth.
"It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information. Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection."
The news of a CMA review also follows a session in the House of Commons where MPs warned against the risk of the unregulated spread of these technologies. Labour’s Darren Jones said people are “rightly worried” about how it could affect the political process, while former Tory minister Tim Loughton told the Commons the explosion of AI could pose the same level of “moral dilemma” as advances in medical technology.
“When advances in medical technology around genetic engineering, for example, raise sensitive issues, we have debates on medical ethics, we adapt legislation and put in place robust regulation and oversight," he said. "And the explosion in AI potentially poses the same level of moral dilemma and it is open to criminal use, for fraud, impersonation and by malign players such as the Chinese government for example."
Science secretary Chloe Smith said the government recognises that “many technologies can pose a risk when in the wrong hands”, adding: “The UK is a global leader in AI, with the strategic advantage that places us at the forefront of these developments.
“Now, through UK leadership, including at the OECD and the G7, the Council of Europe and more, we are promoting our vision for a global ecosystem that balances innovation and the use of AI underpinned by our shared values, of course, of freedom, fairness and democracy. Our approach will be proportionate, pro-innovative and adaptable.”
The CMA’s work on AI also follows debates within the international community regarding the fair use of the technology. Earlier this week, AI 'godfather' Geoffrey Hinton resigned from his job at Google, warning that “bad actors” will use the new technologies to harm others and that the tools he helped to create could spell the end of humanity.
Moreover, the US Federal Trade Commission issued its own warning to the industry this week, saying it was "focusing intensely" on how the technology is being used by firms and the impact it may have on consumers.
The CMA said that, with many of the wider issues raised by the development of AI already being considered by the government and other regulators, its study will focus on competition between firms and on consumer protection.
Earlier this month, notable technology figures including Elon Musk and Steve Wozniak signed an open letter warning that AI labs were locked in an “out-of-control race” and calling for a six-month pause on all large-scale AI experiments.
The watchdog has set a deadline for views and evidence to be submitted by 2 June, with plans to report its findings in September.