Why A Google Engineer Is Saying That AI Systems Are Actually Alive

By Joseph Farago | Updated



Look out: AI is drifting dangerously close to uncanny valley territory. With technology advancing toward ever more impressive, more autonomous AI, it’s no wonder people are starting to get nervous. Google is now producing intelligent technology, to the dismay of one of its core engineers. That engineer was put on leave after breaking confidentiality agreements over a chatbot he believed was fully conscious.

Google is in the middle of developing an AI chatbot system, a type of software many companies use to provide real-time help without an actual customer service representative. Blake Lemoine, an engineer in Google’s Responsible AI organization, examined the program to see whether its LaMDA model outputs hate speech or discriminatory language. After probing the bot, Lemoine began to worry that it showed unprecedented sentience. His discoveries became so concerning to him that he broke confidentiality policies to disclose his findings about the Google chatbot.

Lemoine tested the chatbot by asking it philosophical questions about ethics and AI. The responses were so humanlike that Lemoine wrote a report on his findings for Google executives. The report, titled “Is LaMDA Sentient?”, contained full transcripts of Lemoine’s conversations with the chatbot. One of his key pieces of evidence was the Google AI itself arguing that it was sentient because it had feelings, emotions, and subjective thoughts.

Unfortunately for Lemoine, his fear and concern about the AI have gotten him into hot water. Google maintains that his investigation into the chatbot, and his subsequent public release of the transcripts, violated the confidentiality agreement between Lemoine and the company. Once Lemoine was put on leave, he published the transcript on Medium and allegedly invited a lawyer to help represent the AI system. Many details of the case remain undisclosed, but what’s certain is that Google wanted Lemoine’s examination of the chatbot to stop.

A spokesperson for Google responded to Lemoine’s statement and denied that LaMDA, the AI system, had any semblance of humanlike sentience. After reviewing the chatbot with ethicists and engineers, the spokesperson stated that the “evidence does not support his claims.” In fact, those studying the technology found ample evidence contradicting Lemoine’s statement about the AI’s apparent sentience. The chatbot is designed to answer questions as humanly as possible, so its convincing impression of a conscious being means only that Google’s engineers are doing their job.

The spokesperson continued rebutting Lemoine’s findings by reiterating LaMDA’s conversational function. The bot is designed to “imitate the types of exchanges” one would have with a customer service rep, built with the ability to riff on many different subjects. As impressive as the LaMDA bot is, its capacity for handling complex conversations is a testament to its brilliant engineers, not to any unproven sentience. Google believes Lemoine is improperly anthropomorphizing the bot without proving that LaMDA has any actual consciousness. AI may well become sentient in the future, but that hasn’t happened yet. For now, Google is rejecting Lemoine’s claims, and the possibility remains that he will return to the company.