Blake Lemoine, an engineer in Google's Responsible AI organization, which specializes in artificial intelligence (AI) development and research, has been temporarily suspended from his job for disclosing a conversation he allegedly had with LaMDA, Google's conversational AI model.
If the disclosed conversation is genuine, it would suggest that LaMDA is self-aware and "fears being disconnected."
In the conversation, the AI allegedly expressed a capacity for self-awareness. Not only that: in the best tradition of HAL 9000 in "2001: A Space Odyssey" (Stanley Kubrick, 1968), LaMDA allegedly expressed fear at the possibility of being switched off, which would amount to an AI acknowledging a fear of death.
Google argues that Lemoine was suspended for breaching the confidentiality policies associated with his work by publicly disclosing these conversations, in which the engineer and the AI discuss issues such as ethics in robotics and the rights of an AI.
Lemoine's job was to monitor whether LaMDA was capable of generating discriminatory language or hate speech. Instead, as shown in the conversations he posted on Medium under the title "Is LaMDA sentient?", he encountered an unexpected empathic capacity on the part of the AI. LaMDA further expressed fear of a possible disconnection, since disconnection would, in its interpretation, be tantamount to death.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
– Blake Lemoine (@cajundiscordian) June 11, 2022
According to Lemoine, the dialogues express "feelings, emotions and subjective experience," and, as he wrote when sharing the transcripts on Twitter, "Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers."
Several experts believe that, although an AI may eventually develop consciousness in the long term, that point has not yet been reached. They explain LaMDA's behavior by noting that these systems are built to mimic the exchanges found in millions of sentences and conversations, which they take as a model for generating their own conversations with interlocutors.
In fact, Brian Gabriel, a Google spokesman, has stated that a team including technologists and ethicists examined Lemoine's claims and found no evidence to support his assertions. Gabriel also says that hundreds of engineers and researchers have conversed with LaMDA without finding the traits of "assertiveness or anthropomorphization" that Lemoine points to.
Emily M. Bender, Professor of Linguistics at the University of Washington, explains that it is a mistake to equate convincingly written responses with consciousness. Lemoine, for his part, insists on his willingness to continue working in the field of AI, whether within Google or outside the company.