
New regulations will ensure AI is developed ‘safely’, government says
The government has set out its approach to regulating AI in an effort to build public trust in the nascent technology.
The Department for Science, Innovation and Technology (DSIT) said the plan has been designed to help the UK capitalise on the economic benefits of AI, which already contributes £3.7bn to the UK economy.
Five principles, including safety, transparency and fairness, will guide the use of AI as part of a new national blueprint to be adopted by regulators. However, DSIT also said it wanted to avoid heavy-handed legislation that could “stifle innovation”.
Advances such as ChatGPT could improve productivity and help unlock growth, but there are concerns about the risks the technology could pose to people’s privacy, human rights or safety, the government said.
Britain is currently home to twice as many companies providing AI products and services as any other European country, and hundreds more are created each year.
“Currently, organisations can be held back from using AI to its full potential because a patchwork of legal regimes causes confusion and financial and administrative burdens for businesses trying to comply with rules,” DSIT said.
Instead of giving responsibility for AI governance to a single new regulator, the government will empower existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to develop tailored, context-specific approaches that suit the way AI is actually being used in their sectors.
Five clear principles have been outlined for regulators to consider: safety, transparency, fairness, accountability and contestability.
Over the next 12 months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors.
Science, Innovation and Technology Secretary Michelle Donelan said: “AI has the potential to make Britain a smarter, healthier and happier place to live and work. Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.
“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”
The European Union is also attempting to tackle the issue by devising landmark AI laws and creating a new AI office. But the speed at which the technology is advancing has complicated its efforts.
Grazia Vittadini, chief technology officer at Rolls-Royce, said: “Both our business and our customers will benefit from agile, context-driven AI regulation. It will enable us to continue to lead the technical and quality assurance innovations for safety-critical industrial AI applications, while remaining compliant with the standards of integrity, responsibility and trust that society demands from AI developers.”