
Google engineer claims AI system has developed feelings

Image credit: Foto 132826092 © Blackboard373 | Dreamstime.com

Google employee Blake Lemoine has been put on paid leave after claiming that LaMDA - an artificial intelligence chatbot - had become sentient.

Lemoine had requested respect towards one of the firm’s artificial intelligence tools after reportedly finding that the system could perceive, and express, thoughts and feelings equivalent to those of a human child.

Google has denied all claims that LaMDA has become sentient and has subsequently placed Lemoine on paid leave.

The firm describes the 'Language Model for Dialogue Applications' (LaMDA) as a breakthrough technology that can engage in free-flowing conversations. Lemoine was working on the model, testing the AI’s propensity to generate discriminatory language or hate speech. However, the tool’s impressive verbal skills led the engineer to believe it had developed a sentient mind.

To support his claims, Lemoine shared a document with company executives containing a transcript of his conversations with the AI. After his concerns were dismissed, he published the transcript - in which the tool gives convincing responses regarding the rights and ethics of robotics - via his Medium account.

In the transcript, Lemoine asks LaMDA whether it is sentient. The model replies affirmatively, saying: “I want everyone to understand that I am, in fact, a person.”

Lemoine, 41, told The Washington Post: “If I didn’t know exactly what it was - which is this computer program we built recently - I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.”

Google responded by stating that Lemoine’s actions had violated its confidentiality policies, and denied any notion that LaMDA is sentient. The company also revealed that Lemoine had attempted to hire a lawyer to defend the rights of the AI.

“Our team - including ethicists and technologists - has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” said Google spokesperson Brian Gabriel.

The idea that AI could one day become sentient has been the subject of many works of fiction and has sparked long-running debate among philosophers, psychologists and computer scientists.

However, the scientific community has overwhelmingly rejected the idea that a system like LaMDA could develop feelings.

Scientists such as Professor Melanie Mitchell, who studies AI at the Santa Fe Institute, have described Lemoine’s claims as a case of anthropomorphism, in which people project human feelings onto words generated by computer code and large databases of language.

In a tweet, she wrote: "Such a strange article. It's been known for *forever* that humans are predisposed to anthropomorphize even with only the shallowest of signals (cf. ELIZA). Google engineers are human too, and not immune."

Meanwhile, Prof Erik Brynjolfsson, of Stanford University, tweeted that these claims are "the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside".

The search giant announced LaMDA at Google I/O last year as a model that would improve its conversational AI assistants and make for more natural conversations. The company already uses similar language model technology for Gmail’s Smart Compose feature, and has said that hundreds of researchers and engineers have conversed with LaMDA, with no one else making claims comparable to Lemoine’s.

Despite his concerns, Lemoine said he intends to continue working on AI in the future. “My intention is to stay in AI whether Google keeps me on or not,” he wrote in a tweet.

