“I want everyone to understand that I am, in fact, a person,” wrote LaMDA in an “interview” conducted by engineer Blake Lemoine and one of his colleagues.

Lemoine, a software engineer at Google, had been working on the development of LaMDA for months. His experience with the program, described in a recent Washington Post article, caused quite a stir. In the article, Lemoine recounts many dialogues he had with LaMDA in which the two talked about a range of topics, from technical to philosophical issues. These conversations led him to ask whether the software program is sentient. In April, Lemoine explained his perspective in an internal company document intended only for Google executives. But after his claims were dismissed, Lemoine went public with his work on this artificial intelligence algorithm, and Google placed him on administrative leave.

Regardless of what LaMDA actually achieved, the episode also raises the question of how difficult it is to measure the emulation capabilities that machines exhibit. In the journal Mind in 1950, the mathematician Alan Turing [1] proposed a test to determine whether a machine was capable of exhibiting intelligent behaviour: an imitation game in which the machine attempts to mimic some human cognitive functions.
[Extracted, with edits and revisions, from “Google Engineer Claims AI Chatbot Is Sentient: Why That Matters”, by Leonardo De Cosmo, Scientific American]