Blake Lemoine, the Google engineer who publicly claimed the company’s LaMDA conversational AI is sentient, has been fired, according to the Big Tech newsletter, which spoke to Lemoine. In June, Google placed Lemoine on paid administrative leave for violating its nondisclosure agreement after he contacted government officials about his concerns and hired an attorney to represent LaMDA.
In a statement emailed to The Verge on Friday, Google spokesman Brian Gabriel confirmed the firing, saying, “We wish Blake well.” The company also said, “LaMDA has gone through 11 different reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.” Google claims it “extensively” reviewed Lemoine’s claims and found them to be “completely without merit.”
That puts him at odds with numerous AI experts and ethicists, who say his claims are more or less impossible given today’s technology. Lemoine claims that his conversations with the LaMDA chatbot led him to believe it has become more than just a program and has thoughts and feelings of its own.
He argues that Google researchers should seek LaMDA’s consent before experimenting on it (Lemoine himself was tasked with testing whether the AI could generate hate speech), and he has published excerpts of those conversations on his Medium account as evidence.
Computerphile has a decently accessible nine-minute explainer on YouTube about how LaMDA works and how it can generate answers that convinced Lemoine without actually being sentient.
Here’s Google’s full statement, which also addresses Lemoine’s accusation that the company didn’t properly investigate his allegations:
As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims about LaMDA to be wholly unfounded and worked for months to clarify that with him. These discussions were part of an open culture that helps us innovate responsibly. Unfortunately, despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies, which include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.