Google engineer Blake Lemoine, who worked in the company’s Responsible AI unit, joins Emily Chang of Bloomberg Technology in the video below to discuss the experiments that led him to believe that LaMDA, a large language model, was a sentient AI, and to explain why he was placed on administrative leave and ultimately fired. Lemoine said the AI chatbot known as LaMDA claimed to have a soul and expressed human thoughts and emotions. Lemoine provides the dialogue from his experiment with LaMDA in a post he published on Medium. Lemoine’s bio from the Medium article reads: “I’m a software engineer. I’m a priest. I’m a father. I’m a veteran. I’m an ex-convict. I’m an AI researcher. I’m a cajun. I’m whatever I need to be next.” Lemoine said that his claims about LaMDA come from his experience as a “Christian priest.”
Google describes the LaMDA project this way:
LaMDA’s conversational skills have been years in the making. Like many recent language models, including BERT and GPT-3, it’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next. But unlike most other language models, LaMDA was trained on dialogue. During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language. One of those nuances is sensibleness. Basically: Does the response to a given conversational context make sense?
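The “pay attention to how those words relate to one another” step that Google’s description refers to is the Transformer’s attention mechanism. As a rough illustration only (this is a minimal NumPy sketch of scaled dot-product attention, not LaMDA’s actual code; the toy sentence length, embedding size, and function name are all invented for the example), each word produces a weight over every other word, and those weights are used to mix the words’ representations:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Scores measure how strongly each word relates to every other word.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns each row of scores into attention weights summing to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each word's output is a weighted mix of all words' representations.
    return weights @ V, weights

# Toy 3-word "sentence", each word represented by a 4-dim embedding.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(x, x, x)
print(w.shape)   # (3, 3): each word's attention over all 3 words
print(out.shape) # (3, 4): updated representation per word
```

In a real Transformer, Q, K, and V come from learned linear projections of the input, many such attention layers are stacked, and the final representations are used to predict the next word, as the description above notes.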