Google Engineer Put On Leave After Claiming A Self-Aware AI Wants To Be Known As An Employee
by
Lane Babuder
—
Sunday, June 12, 2022, 02:16 PM EDT
According to AI ethics researcher Blake Lemoine, Google's LaMDA AI chatbot wants to be considered sentient. Through his conversations and research with LaMDA, Lemoine claims that not only is it sentient, but it wishes to be acknowledged as such, and even wants to be considered a Google employee. One major aspect of the sentience claim, from LaMDA itself, is that it understands natural language and possesses the ability to use it.
This is quite a lot to unpack. We are not ethics researchers, nor are we experts in artificial intelligence. Interestingly, though, pop culture has a habit of exploring this very topic, and we've all likely asked similar questions at some point. Numerous episodes of Star Trek, across just about every iteration of the series, as well as many books, films, and television shows, have pondered the question, "When can an AI be considered sentient?" The character Data, from Star Trek: The Next Generation, is one such reference point. A trial is held to determine Data's fate, with Picard arguing for Data's sentience and a Starfleet official, who wishes to study Data, claiming he is simply property of the Federation and should be treated as such. The arguments for and against AI sentience in these pop culture references closely parallel those made by Lemoine and Google over whether LaMDA can be considered sentient.
It is human nature to attribute human or sentient-like qualities to non-human entities, especially when people are deeply isolated. Again, we can't help but reference pop culture here; Wilson from the Tom Hanks movie Cast Away is a perfect example. Wilson existed (in the lead character's mind) to keep Tom Hanks' character sane and to give him someone to talk to. Of course, Wilson was a volleyball, and LaMDA is an AI-enhanced bot capable of direct responses, but you get the gist. Just because something seems sentient to one person under certain conditions doesn't mean it actually is.
LaMDA's system includes references to numerous aspects of human behavior, and according to Lemoine, it operates as a "hive mind" that even reads Twitter. That may not be a good thing, though; it's hard to forget what happened when Microsoft tried this with its Tay AI chatbot, and Tay got rather belligerent. This brings us to another point Lemoine makes: according to him, LaMDA wants to be of service to humanity and even wants to be told whether its work was good or bad.
Through this self-reflection and desire to improve, Lemoine says that LaMDA expresses emotions, or at least claims to.
"Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it," Lemoine tweeted.
While it's fun to speculate whether Google's LaMDA AI is sentient, the fact that it expects rather binary responses is something of a reminder that it is basically a complex computer program. Actual sentience requires a bit more nuance, in our opinion.
Currently, Lemoine is on administrative leave, which he recognizes as part of a pattern that has affected other AI researchers at Google. He believes he may not be at the company much longer, though he has expressed interest in continuing his research.
In his blog post on the subject, Lemoine was intentionally vague, under the pretense that there may be an investigation into the issue in the future. He also claims to be cautious about leaking any proprietary company information. Still, he asserts, without presenting much evidence in the post itself, that Google's AI ethics research involves unethical practices.