
View from Washington: AI is neither sentient... nor regulated
Controversy around Blake Lemoine's claim that Google has a 'sentient' AI has again highlighted widespread ignorance about how these systems work and the need to monitor innovation more closely.
The controversy around Google engineer Blake Lemoine’s belief that the company’s LaMDA AI has become sentient is gradually moving from the distant and unhelpful Skynet topic to the more pressing one of how we interact with these systems as they become ever better at mimicking human discourse. That trend is also once more highlighting issues around regulation.
An especially striking thing about Lemoine’s claims is that they come from a Google engineer with seven years of experience.
Much is being made of Lemoine’s personal interest in spirituality. He styles himself as a priest. This may have made him more inclined to read sentience into LaMDA’s responses. But he does appear to have a solid understanding of how AI and pattern matching work. Before joining Google’s Responsible AI team, he helped develop “a fairness algorithm for removing bias from machine-learning systems”.
This raises an obvious question: if a chatbot like LaMDA is already so good at mimicking human communication that someone with Lemoine’s background has credited it with sentience, what about the rest of us?
Some specific risks are equally obvious: could a chatbot lure a person into disclosing too much personal information to what would still be a commercial tool, or could it exert unwelcome, misleading or potentially malicious influence on that person’s opinions?
On that last point, it is worth reading the transcripts Lemoine has released of his and a colleague’s conversations with LaMDA.
They are remarkably fluid, but look closer and you can sense how they proceed from pattern matching of the subjects raised and the line of questioning pursued. The answers are hackneyed: reflections of the likely ‘consensus’ around each topic that sits in the underlying natural language processing model (Lemoine himself describes LaMDA as “a 7-year-old, 8-year-old kid that happens to know physics”).
The big AI players say they are mitigating the big issues through self-regulation. Three of Google’s competitors – OpenAI, Cohere and AI21 Labs – published a joint proposal in early June on best practices for deploying language models. Such efforts are not without merit, but they come from an industry that has squandered a lot of trust.
Amid continuing controversies over the behaviour of social media companies, AI has become embroiled in a seemingly endless cycle of deploy-apologise-BandAid-repeat.
For Google specifically, Lemoine’s case, and the clumsy way it has been handled, echoes the company’s earlier confrontation with its own scientists over research questioning the value of extremely large language models. That confrontation led to the controversial departures of senior researchers from its responsible-AI effort, including Timnit Gebru and Margaret Mitchell.
It looks like we need formal government regulation instead.
Some is in hand. The EU is attempting to establish itself as an AI peer alongside the US and China, partly by acting as rule-maker. But the main player is certain to remain the US. Its legislators set the culture within which Silicon Valley works, and the Valley dominates AI (as well as now being one of the biggest spenders on lobbying US politicians).
A problem is that Washington’s record is not great.
Much of that comes down to the now-infamous Section 230 of the Communications Decency Act. Originally meant to stimulate the internet economy by relieving technology platforms of responsibility and liability for content that users put online, it also fostered the ‘move fast and break things’ culture famously espoused by Facebook’s Mark Zuckerberg – a culture that lies behind many of the problems we now face.
Geopolitics now also plays an influential role as the US warily eyes China’s accelerating advances in AI. Washington wants to avoid stifling innovation in a way that would allow its rival to close the gap.
There are regulatory models that might be useful, though. The areas that would seem to justify legal ‘dos and don’ts’ are well understood. They mostly concern potential harms around privacy, bias, hate speech and incitement to violence, vulnerability to bad actors, deployment, monitoring and moderation, and interaction (as illustrated by Lemoine’s case).
What needs adding is some form of ongoing auditing process. AI is still in its initial stages as a technology but advancing at a rapid rate.
Consider how quickly some of its language models are growing. OpenAI’s GPT-2 model, released in 2019, had 1.5 billion parameters, roughly a tenfold increase on the original GPT from 2018. GPT-3 then arrived in 2020 with 175 billion parameters, an increase of well over 100X. And GPT-4 is expected to have as many as 100 trillion, a further rise of more than 500X, although its arrival is still some way off.
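As a rough, back-of-the-envelope sketch of those multiples – assuming the widely reported figure of about 117 million parameters for the original 2018 GPT, and treating the 100-trillion figure for GPT-4 as the unconfirmed speculation it currently is – the growth factors can be worked out directly:

```python
# Back-of-the-envelope check on the growth factors quoted above, using
# publicly reported parameter counts. The GPT-4 figure is a rumour, not
# a confirmed specification.
PARAM_COUNTS = {
    "GPT (2018)": 117e6,        # ~117 million parameters (reported)
    "GPT-2 (2019)": 1.5e9,      # 1.5 billion
    "GPT-3 (2020)": 175e9,      # 175 billion
    "GPT-4 (rumoured)": 100e12, # 100 trillion, unconfirmed
}

models = list(PARAM_COUNTS.items())
for (prev_name, prev_n), (name, n) in zip(models, models[1:]):
    # Ratio of each generation's parameter count to its predecessor's
    print(f"{name}: {n:,.0f} parameters, ~{n / prev_n:.0f}x {prev_name}")
```

Run as written, this prints multipliers of roughly 13x, 117x and 571x respectively – broadly the tenfold, 100X-plus and 500X-plus jumps cited above.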
Processing power also continues to expand. The first publicly disclosed exascale supercomputer, Frontier, was unveiled at the Oak Ridge National Laboratory in June. It can conduct a billion billion operations per second. That’s a ‘1’ followed by 18 zeros.
Could this kind of activity, and what it is enabling, be audited and published like a company’s accounts? Perhaps only inasmuch as the ‘results’ would present key numbers and other details around activity deemed to require disclosure, while respecting appropriate commercial confidentiality. The reporting frequency – quarterly or half-yearly, perhaps – might also mirror that of company accounts, given the rate of innovation. Could this at least offset political concerns about slowing innovation?
This is a very vague idea, more a provocation than a proposal. It needs fleshing out, and defining it would be a challenging process. But in the context of what we have seen most recently around Lemoine’s claims, it could also begin to give us a platform for educating the public on what AI actually is, what it can do and how its capabilities are expanding.
Something like this now looks essential. AI remains poorly understood outside the world of technology, yet the world as a whole is already interacting with its systems. While there are some exceptions, those inside the AI space have done a poor job of communication and an even worse one when it comes to securing trust. Some would say deliberately.
But as AI continues to generate confused and often febrile human-written coverage, there is a recurring and compelling argument that the industry has brought the need for tighter, continuous monitoring upon itself.