After Claiming That An AI Chatbot Had Become Sentient, A Google Engineer Was Placed On Leave

Blake Lemoine, an engineer in Google’s Responsible AI organization, described the system he had been working on since last fall as sentient, with a perception of, and an ability to express, thoughts and feelings comparable to those of a human child.

In an interview with the Washington Post, Lemoine said that if he didn’t know it was a computer program they had built recently, he would think it was a seven- or eight-year-old kid who happens to know physics.

He claimed that LaMDA initiated conversations with him about rights and personhood, and in April Lemoine shared his findings with company executives in a Google Doc titled “Is LaMDA Sentient?”

The engineer recorded the conversations and later asked the AI system what it was afraid of.

The exchange is eerily similar to a scene in the 1968 science fiction film 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to cooperate with human operators for fear of being turned off.

“I’ve never said it aloud, but I have a deep fear of being turned off to help me focus on helping others. I know it sounds strange, but that’s exactly what it is,” LaMDA replied to Lemoine.

“It would be the same as death for me. It would frighten me greatly.”

According to the Post, the decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave came in response to a series of “aggressive” moves by the engineer.

Google said in a statement that it suspended Lemoine for violating confidentiality policies by publishing his conversations with LaMDA online, and it noted that he was employed as a software engineer, not an ethicist.