
Regulation for the AI-driven new age
Regulation of new tech seems to be falling well behind innovation and the emergence of artificial intelligence will only complicate matters.
The need for new online regulation is recognised globally. Ongoing innovation around artificial intelligence (AI), machine learning, cloud and edge computing, and the Internet of Things is driving calls for strategies that future-proof both social safety and technological innovation. Continuing challenges presented by online hate, fraud, fake news and more are also driving calls for tougher rules.
Alongside this, there is the claim that ‘Big Tech’, the poster child for 21st-century market economics, is in fact now stifling progress. Apart from well-known competition concerns, there is talk of a technological ‘kill zone’ that exists around the major corporations because their ability to dictate the innovation path prevents other, smaller players from bringing forward new ideas. That power stems not just from an ability to acquire start-ups like Pokémon cards but also from their huge engineering resources, which allow them to dominate standards-setting, promote favoured ‘open-source’ technologies and develop APIs and SDKs (application programming interfaces and software development kits) for their ubiquitous platforms so that they effectively act as market gatekeepers.
There’s more. Some of the companies are now thought of as big enough to challenge the authority of governments. Never mind the criticism of Facebook, Twitter and Google in Washington DC; Beijing has recently signalled where the boundaries of ‘Capitalism with Chinese characteristics’ currently lie by cracking down hard on a raft of local internet companies (e.g. Alibaba in e-commerce and Didi Chuxing in ride-hailing) whose financial and data power put them on a collision course with the ruling Communist Party.
Like most of its international counterparts, the UK is in the middle of a regulatory refresh around all things digital. The proposed economic component is among the more liberal, but the legislation moving forward illustrates some of the main general difficulties and how governments are trying to address them.
Specifically, the government said in last May’s Queen’s Speech it would allocate time in the current Parliamentary session for the Product Security and Telecommunications Infrastructure Bill (PSTIB) and bring forward its long-gestating draft Online Safety Bill. It will also introduce the Telecommunications (Security) Bill (TSB), which partly seeks to deal with threats to critical domestic infrastructure.
Beyond that, the Department for Digital, Culture, Media and Sport (DCMS) has opened consultation around its ongoing strategy for digital regulation, promising an “unashamedly pro-tech approach” to the AI era. That sounds good, but it also has its critics – and the strategy does not yet feel like the finished article.
The PSTIB follows two years of consultation and aims to “ensure that smart consumer products, including smartphones and televisions, are more secure against cyber attacks, protecting individual privacy and security”.
Analysts say that it was a good place to start the regulatory update because it is tangible for both the public and the less tech-savvy legislators. For example, as smart doorbells and health-and-fitness gadgets become more popular (though they also attract coverage when the cameras and other data inside them are hacked), explaining why new rules are required to safeguard user data is relatively straightforward.
The overarching idea behind the PSTIB has therefore been welcomed across politics. Labour MP Chi Onwurah, Shadow Minister for Science, Research and Digital, says, “I’m grateful that the issues around smart homes are climbing up the agenda and I’m grateful that the consumer products part is climbing up the agenda too because I have been very worried for a number of years about what we are putting into our children’s toys, our toothbrushes, our fridges – all these things – and that there was a lack of awareness or concern about what could happen.
“One of the issues is that because of the low prices and the very low margins, if you’re making a billion chips for a billion products and security is an extra penny per chip, that is still a huge amount; too many companies are not going to include it. So – and it is a tall order – you need a system that aligns the incentives to do that.”
As with all government bills, the details in the ultimately published version will be critical. But some of what is already known about the PSTIB’s evolution shows Whitehall moving towards the type of monitoring strategies now considered necessary for digital regulation. In particular, its consultation raised the need for regular legislative reviews – typically every two years, almost in lockstep with Moore’s Law. But this example raises another difficult issue: regulatory resources.
‘If you do not give the regulator the people and the money it needs, it doesn’t matter what rules you set or what powers it has.’
“Any of this is only going to be as good as the number of people you put into it,” says Jon Callas, director of technology products at the Electronic Frontier Foundation. “If you do not give the regulator the people and the money it needs, it doesn’t matter what rules you set or what powers it has. Because over there, on the industry’s side, they are very well resourced.”
Indeed, although comparable figures are not available for the UK, US and EU data show the sector’s giants already spending big to lobby administrators as new regulations evolve and emerge.
In the first quarter of 2021, Amazon ($4.8m, or £3.5m) and Google ($2.7m) set new records for disclosed spending to influence the actions of the US federal government.
According to the most recent data for Brussels, the European operations of Google, Facebook and Microsoft hold the top three positions for lobbying spends, with Apple in sixth place. The leading trio each spends more than €5m.
Most of this currently goes towards commercial lobbying around anti-trust probes and other legislation that primarily affects commercial operations, but as the growth of AI sharpens the focus on emerging technologies still further, the balance will shift.
The UK’s Telecommunications (Security) Bill imposes tougher security requirements upon network owners and operators – or they will face fines of up to 10 per cent of turnover or £100,000 a day.
It tackles several issues. Securing the UK’s backbone networks will help protect traffic related to critical national infrastructure as well as that of business and consumers. It will also, in what is already being called ‘The Huawei clause’, “strengthen the security and oversight of technology used in telecoms networks including the electronic equipment and software used across the network.” Then, in a section likely to be tightly scrutinised by civil liberties groups, it will include measures to “ensure that the Government can respond to national security threats within our networks now and in the future”.
For some, that last element may go too far. But another question mark over this bill is whether, elsewhere, it goes far enough. That speaks to another looming global issue with digital regulation: how to regulate something that is now essentially ubiquitous, but which also has a hitherto unseen complexity as a system-of-systems that may only be at the beginning of a growth spurt. With that in mind, is it really enough to concentrate on – as so far seems the case with the TSB – predominantly one set of players, the network operators?
The UK government is itself an example of how entangled things have become, as Labour’s Onwurah explains: “There’s a fragmentation of general responsibility in digital. [The Department for Digital, Culture, Media and Sport] has some of it, the Cabinet Office has some, parts of the agenda go with [the Department for Business, Energy and Industrial Strategy] and there are various other organisations and quangos. Then, if you add infrastructure and securing that, water’s in [the Department for Environment, Food and Rural Affairs] and energy’s in BEIS as well.”
Yet a further component in the UK – and one that speaks directly to the network security objectives of the TSB – is that much of the infrastructure has been privatised. “And a lot of what happens is black-boxed, not just when it comes to the companies that own the utilities but also when it comes to the relationships between them and their subcontractors,” Onwurah adds.
Integrated oversight of increasingly integrated networks looks like a necessary step. Onwurah says that could partly be addressed by creating a role such as Chief Scientist, analogous in some ways to the White House Chief Technology Officer during the Obama administration.
The US has also discussed the idea of an overarching federal agency dedicated to digital issues and – inevitably – to the growing impact of AI and the applications and infrastructure being built around it.
This would be a huge political challenge, raising the prospect of the kind of civil service turf wars that have long dogged governments the world over. But even for those who might be prepared to grasp this nettle, other immediate concerns arguably stand in the way.
Maria Zervaki, a manager with the Access Partnership, a leading public policy consultancy, notes: “So you can say, ‘Let’s have a federal AI agency that can look at the implications for security, for privacy, what is happening in inference for machine learning and so on.’ You immediately have the problem of funding and, with Covid-19, I’m not sure that governments worldwide would be able to focus on that even if it is the correct policy approach.”
Again, the issue comes back largely to the cost of delivering even on those measures that are put into legislation.
‘I don’t see how you can come up with a list [of principles for AI regulation] and not have privacy, safety and security on it.’
In early July, the DCMS published ‘Digital Regulation: Driving Growth and Unlocking Innovation’. Like its earlier consultation around consumer IoT security, it is partly consultative but does set out how the UK government wants to address the next wave of digital economic growth.
It follows a “principles-led approach” being adopted elsewhere. The thinking behind this approach is that since AI is still relatively immature, it is better to define the world within which you want it to exist and the benefits you want from it, rather than trying to regulate the technology itself. For now, there are simply too many ‘unknown unknowns’ to set the boundaries.
The problem is that the UK’s chosen principles feel somewhat removed from the main challenges it has itself identified. They are:
- Actively promote innovation.
- Achieve forward-looking and coherent outcomes.
- Exploit opportunities and address challenges in the international arena.
As economic and R&D priorities, these make sense, but even a member of the government’s Industrial Strategy Council, which was closed down earlier this year, was surprised by the emphasis. “They’re good for pushing the tech sector forward,” he says, “but they lack the whole-of-society component if this is about regulation. ‘First duty of government’ and all that.”
Labour’s Onwurah is more direct. “I don’t see how you can come up with a list like that and not have privacy, safety and security on it.”
The omissions may be tempered somewhat by other references in the paper to keeping the UK “safe and secure” online, and the fact that the government can point to the legislation it is bringing forward. But with so much innovation to come – and with some of that already having escaped into public use with serious flaws (e.g. facial recognition systems) – these priorities do not look likely to fall off the agenda.
Maria Zervaki raises a related yet broader point about how principles-based regulation is likely to work best. “But when we’re speaking about this kind of emerging technology, like AI, like machine learning, like IoT, I think we should look at it as not being about regulating the technology, but the use of the technology. That’s something that sometimes legislators don’t really understand.”
The paper first emerged almost unremarked in early July when Culture Secretary Oliver Dowden and much of the country were occupied elsewhere with either the future of the Covid-19 restrictions or football. With the consultation running until late September, though, there is the growing feeling that it needs a bit of a rethink.
In many ways, the UK is behind other authorities in advancing the new digital regulatory agenda. The EU, for example, has already published its Artificial Intelligence Act. But that raises many of the same questions.
There is the problem of integration. According to a paper by University College London’s Michael Veale and Radboud University’s Frederik Zuiderveen Borgesius, the Act needs to dovetail with at least five other technology-centric acts and regulations in draft and – perhaps a bridge the UK is yet to cross – has weaknesses in terms of how it relates to existing laws in areas such as fraud. Integration and complexity again.
Meanwhile, the cost of enforcement and – even if you do admit privacy and security – the question of where the boundaries should sit continue to tax legislators. The problem, though, is that everybody is arguably behind the curve, with most experienced observers remarking that these processes should have begun five, maybe even 10 years ago.