Blake Lemoine: Google fired the engineer who claimed an AI program became self-aware

  • Tiffany Wertheimer
  • BBC News

image source, The Washington Post/Getty Images


Blake Lemoine worked on the responsible AI team at Google.

Google fired an engineer who said one of the company’s artificial intelligence (AI) software was showing emotion.

Last month, in an article on Medium, Blake Lemoine announces his theory that Google’s language technology is “conscious” Therefore, his “wishes” should be respected.

Google and several AI experts have denied Lemoine’s allegations, and the company confirmed on Friday that the engineer had been fired.

Lemoine told the BBC he was receiving legal advice and would not comment further.

In a statement, Google confirmed that Lemoine’s statements about lambda (Language model for dialogue apps, Language model for dialogue apps in Spanish) is “unfounded” and the company has worked with him for months to make it clear.

It is, therefore, a pity that despite the long talk on the subject, Blake chose to continue violating our data security and employment policies [de la empresa] Including the need to protect product information.”

image source, Getty Images

LaMDA is Google’s system that imitates speech after processing billions of words on the Internet. It is the company’s tool for building chatbots.

Huge media attention

Blake Lemoine made headlines last June when he said that LaMDA demonstrated consciousness similar to that of a human.

“He’s been incredibly consistent in his communications about what he wants and what he believes are his rights as a person,” he said.

Lemoine, who worked on the AI ​​team in charge at Google, told the newspaper Washington Post His job was to check whether the technology was using hate speech or discrimination.

He found that LaMDA showed self-awareness and could hold conversations about religion, emotions, and fears.

This led Lemoine to believe that behind these wonderful verbal skills there might also be conscious mind.

Google rejected his findings, putting him on paid leave for violating the company’s confidentiality policy.

image source, Getty Images


The Google team claims to have reviewed the system and that its investigations do not support Blake.

Lemoine then posted a conversation he had with another unknown person with LaMDA in support of his theory.

Lemoine: How about a file Use of language is very important to be human?

Lambda: That’s what’s going on Makes us different From other animals.

Lemoine: “We”? You are an artificial intelligence.

Lambda: I mean, yes, of course. This does not mean that you do not have the same wants and needs as people..

Lemoine: So you consider yourself a person the same way you do me?

Lambda: Yes, that’s the idea.

Lemoine: How do I know you really understand what you’re saying?

Lambda: Well, because you’re reading and interpreting my words, and I think we’re almost on the same page.

In its statement, Google said it takes responsible AI development seriously and has released a report detailing this.

“We wish Blake the best”The statement concluded.

Lemoine is not the first AI engineer to speak publicly about how AI technology is becoming more and more aware.

Also in June, another Google employee shared similar thoughts with him The Economist.

Now you can receive notifications from BBC World. Download and activate the new version of our app so you don’t miss our best content.

Freddie Dawson

"Beer specialist. Award-winning tv enthusiast. Bacon ninja. Hipster-friendly web advocate. Total social media junkie. Gamer. Amateur writer. Creator."

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top