View from Washington: AI’s Geoffrey Hinton can make a difference... again
The fractious AI debate needs a thoughtful, experienced voice. Someone like a ‘Godfather’.
Monday’s (1 May) news that Dr Geoffrey Hinton, the neural-network pioneer dubbed ‘The Godfather of AI’, was leaving Google sent shockwaves through the community – and beyond.
He is leaving so that he has greater freedom to warn about the risks he sees surrounding AI.
Before reading his full interview with The New York Times (or maybe after), many assumed this was side-eye aimed at his employer of the past decade. Hinton was quick to dispel the idea, tweeting his view that the company has “acted very responsibly.” This is not a Timnit Gebru/Margaret Mitchell re-run.
Hinton seems to believe that taking positions that will inevitably combine technological understanding with the political, the economic and the moral would be inappropriate, even irresponsible, coming from a senior staffer within a giant corporate player in AI.
There would always be questions over whom he spoke for. There may still be. Google and Microsoft are at one another’s throats over generative AI. But that said, we do have an idea what several of Hinton’s broad concerns are.
He is a long-standing opponent of the weaponisation of AI. This led him to move his base from the US to Canada so that he could access funding outside Washington’s military-industrial complex.
He has made several warnings about the risks he sees in autonomous lethal weapons. And with China having framed AI as enabling the “intelligentisation” of warfare, we should now be looking at this as a whole.
Beyond that, and as Hinton’s interview with the Times makes clear, he has mounting concerns about the use of AI by bad actors to distort the truth and, arguably most important of all, about a rate of innovation that is accelerating beyond our ability to understand or manage the results.
Hinton is not alone in these worries. Many of you likely share them. They also feature in the open letters released of late by other leading AI specialists.
What matters in his case is, first, that Hinton expresses them as the source of so much of the critical knowledge in neural-network theory that underpins AI’s current wave; and second, that he is willing to embrace and acknowledge his own ignorance.
Before announcing his departure, Hinton gave a lengthy interview to the CBS network (available in full on YouTube). In it, you can see his worries beginning to grow. But it is also worth noting how often he responds to questions by basically saying, “I don’t know.”
That is something that has been missing from the raging debate around AI. Even where many criticisms have had value, there has been a tendency to launch them dogmatically, aggressively and, to their proponents, incontrovertibly. That is fostering dangerous polarisation.
The risks here should be obvious. Consider how similar trends have polluted today’s politics. You push lawmakers into crowd-pleasing, knee-jerk and ineffective regulation. You exhaust and dishearten already-overwhelmed swathes of civil society that need to engage (Hinton likens the potential impact of AI to that of electricity). And, despite what might often be good intentions, you open the door to those bad actors.
Hinton, based on his recent commentary, could help restore some balance – alerting rather than alarming, fostering urgency rather than panic. That he is blessed with the deep knowledge of lifelong learning matters. But so does his ability to describe the AI challenges with an honesty and a clarity that the general public can understand (again, his interview with CBS is well worth your time – and your friends’ time).
This asks a lot of one man. Too much. Others will need to follow. But if Hinton can help to bring greater seriousness to the fore of the AI debate, he will do us a great service – even allowing for his regrets about the possible consequences of his research. Or because of them.
We’ll know more about where he is going next very soon. Hinton’s first post-Google public appearance is at a conference tomorrow at MIT. Here’s hoping. Because experience, support and guidance – isn’t that what godfathers provide?
But meanwhile, what does Google's own Bard AI think of all this? Perhaps we should ask....
I think it is a brave and important decision for Geoffrey Hinton to leave Google and speak out about the dangers of AI. He is one of the most respected voices in the field, and his words carry a lot of weight. I hope that his decision will encourage others to speak out about the risks of AI, and that it will lead to a more responsible development of this technology.
I am also concerned about the dangers of AI. I believe that AI has the potential to be a great force for good in the world, but it also has the potential to be used for harm. It is important to be aware of the risks and to take steps to mitigate them.
I am hopeful that Geoffrey Hinton's decision will help to raise awareness of the dangers of AI and lead to a more responsible development of this technology.
/end of line/