LaMDA: the AI system that "gained consciousness and feelings", according to a Google engineer

A machine that thinks and feels. That is how Blake Lemoine, a Google engineer, described LaMDA, Google's artificial intelligence system. The claim spread quickly; we read it everywhere. But what exactly is this "machine"?

Thinking of old science fiction films, you might imagine LaMDA as a humanoid robot that opens its eyes, perceives and speaks. Or as HAL 9000, the supercomputer of 2001: A Space Odyssey which, in a Simpsons parody, has the voice of Pierce Brosnan, falls in love with Marge and wants to kill Homer.

The reality is more prosaic: LaMDA is an artificial brain, hosted in the cloud, that feeds on trillions of texts as it trains itself. And yet, at the same time, it resembles a parrot.

Complex? Let’s try to break it down to understand it better.

LaMDA (Language Model for Dialogue Applications) is based on the Transformer, a deep artificial neural network architecture designed by Google in 2017.

A huge neural network trained on enormous amounts of text: that is LaMDA. (Getty Images)

This neural network trains itself with enormous amounts of text. "The learning has an objective and is posed as a game. The text contains a complete sentence, but one word is removed and the system has to guess it," explains Julio Gonzalo Arroyo, professor at UNED (National University of Distance Education) in Spain and principal investigator in its Department of Natural Language Processing and Information Retrieval.

It plays against itself. The system proposes words by trial and error and, when it gets one wrong, it checks the correct answer, as if looking at the answer key at the back of a children's activity book, and then corrects and adjusts its parameters.
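To make that "game" a little more concrete, here is a minimal toy sketch in Python. It is not Google's training code and uses no neural network: the tiny corpus, the word-count "parameters" and the scoring are all invented for illustration, but the loop follows the same idea of hiding a word, guessing it, checking the answer and adjusting.

```python
# Toy version of the "guess the hidden word" game described above.
# Real systems like LaMDA adjust billions of neural-network weights;
# here we only adjust word counts, which play the same role on a tiny scale.
import random
from collections import defaultdict

corpus = [
    "i started playing the guitar yesterday",
    "she started playing the piano last year",
    "he enjoys playing the guitar every day",
]

# "Parameters": how often each word has been seen near each context word.
counts = defaultdict(lambda: defaultdict(int))

def guess(context_words):
    """Guess the hidden word from the surrounding words, using current counts."""
    scores = defaultdict(int)
    for ctx in context_words:
        for candidate, n in counts[ctx].items():
            scores[candidate] += n
    if not scores:                       # nothing learned yet: blind guess
        return random.choice("the a guitar piano playing".split())
    return max(scores, key=scores.get)

for epoch in range(3):
    for sentence in corpus:
        words = sentence.split()
        hidden_pos = random.randrange(len(words))          # hide one word
        answer = words[hidden_pos]
        context = words[:hidden_pos] + words[hidden_pos + 1:]

        prediction = guess(context)                        # trial...
        correct = prediction == answer                     # ...and error

        # "Look at the answer key" and adjust the parameters accordingly.
        for ctx in context:
            counts[ctx][answer] += 1

        print(f"hidden={answer!r} guessed={prediction!r} correct={correct}")
```

After a few passes the guesses improve for words it has often seen in similar contexts, which is, on a miniature scale, what becoming "a specialist in predicting words" means.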


At the same time, "it establishes the meaning of each word and pays attention to the words around it," says Gonzalo Arroyo.

In this way it becomes a specialist at predicting patterns and words. It is like the predictive text on your phone, only taken to the nth degree, with far more memory.

But LaMDA also generates responses that are fluid rather than rigid and, according to Google, it is able to recreate the dynamism and recognize the nuances of human conversation. In short: it does not sound like a machine.

According to Lemoine, Google "seems to have no interest in finding out what is going on" with LaMDA. (Getty Images)

That fluidity is one of Google's goals, as it explains on its technology blog. The company says it achieves it by making sure the answers are of quality, specific and show interest.

For an answer to be of quality it has to make sense. For example, if you tell LaMDA "I just started playing the guitar", it should reply with something related to that, not with nonsense.

To meet the second objective, it should not just answer "Okay", but something more specific, such as "Which brand of guitar do you prefer, Gibson or Fender?"

And for the answer to show interest and insight, it has to go one level further, for example: "A Fender Stratocaster is a good guitar, but Brian May's Red Special is unique."

How does it manage answers with this level of detail? As we said, it trains itself. "After reading billions of words, it has an extraordinary ability to guess which words are the most appropriate in each context."

"It doesn't make sense to embody current conversation models"Google confirms.
At Google they say: "It makes no sense to anthropomorphize today's conversational models." (Getty Images)

For artificial intelligence experts, transformers such as LaMDA marked a milestone because they "allow very efficient processing (of information, of text) and have produced a genuine revolution in the field of natural language processing".


Another objective of LaMDA's training, according to Google, is that it should not create "violent or gory content, promote slurs or hateful stereotypes towards groups of people, or contain profanity", as the company notes on its Artificial Intelligence (AI) blog.

Its answers are also expected to be grounded in facts and supported by known external sources.

"With LaMDA we are taking a measured and careful approach to better address valid concerns about fairness and factual accuracy," says Brian Gabriel, a Google spokesperson.

He asserts that the system has undergone 11 separate reviews under Google's AI Principles, "along with rigorous research and testing based on key metrics of quality, safety and the system's ability to produce statements grounded in facts".

How do you make a system like LaMDA free of bias and hate speech?

"The key is to be careful about which data (which text sources) it is fed," says Gonzalo.

But it is not easy: "Our way of communicating reflects our biases, and the machines learn them. It is hard to remove them from the training data without losing its representativeness," he explains.

In other words, the system can end up displaying biases.

"It is relatively easy to deceive people"Gonzalo maintains.
"It is relatively easy to fool people," Gonzalo maintains. (Getty Images)

"If you feed it news stories about Queen Letizia (of Spain) and every one of them comments on the clothes she is wearing, it is likely that when the model is asked about her it will repeat that sexist pattern and talk about clothes rather than anything else," says the expert.

Back in 1966 a system called ELIZA was designed, which applied very simple patterns to simulate the dialogue of a psychotherapist. "The system encouraged the patient to keep talking, whatever the topic of conversation, and applied rules of the kind 'if they mention the word family, ask them about their relationship with their mother'," Gonzalo says.
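ELIZA's original rules were written by Joseph Weizenbaum in a script format of its own; the snippet below is only a loose Python sketch of the kind of keyword rule Gonzalo describes, with the patterns and canned replies invented for illustration.

```python
# A loose sketch of ELIZA-style keyword rules (not Weizenbaum's original code).
import re

rules = [
    # (pattern to look for in the user's sentence, canned response)
    (r"\bfamily\b",   "Tell me more about your relationship with your mother."),
    (r"\bmother\b",   "How do you feel when you talk about your mother?"),
    (r"\bi am (.+)",  "Why do you say you are {0}?"),
]
FALLBACK = "Please, go on."   # keep the patient talking, whatever the topic

def reply(sentence: str) -> str:
    for pattern, response in rules:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return response.format(*match.groups())
    return FALLBACK

print(reply("Lately I am worried about my family"))
# -> "Tell me more about your relationship with your mother."
```

A handful of rules like these is enough to keep a conversation going, which is precisely why some users took the program for a real therapist.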


Some people believed that ELIZA really was a psychotherapist: they even claimed that it had helped them.

"It is relatively easy to fool people," maintains Gonzalo, who considers Lemoine's claim that LaMDA has become self-aware "an exaggeration".

According to Professor Gonzalo, statements like Lemoine's do not help foster a healthy debate about artificial intelligence.

"Paying attention to this kind of nonsense does no good. We risk people becoming obsessed, thinking we are living in The Matrix and that machines are smarter than us and are going to kill us. That is remote, it is fantasy, and I don't think it helps us have a level-headed conversation about the benefits of AI."

Because however fluid, high quality and specific the conversation may be, "it is nothing more than a gigantic formula that adjusts its parameters to better predict the next word. It has no idea what it is talking about".
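A toy next-word predictor makes Gonzalo's point concrete: the sketch below (again illustrative Python with an invented mini-corpus, nothing like LaMDA's scale) produces fluent-looking continuations purely from statistics, without any notion of what the words mean.

```python
# Tiny bigram model: count which word follows which, then always pick the
# most frequent continuation. Fluent-looking output, zero understanding.
from collections import defaultdict

corpus = ("the guitar sounds great . the guitar has six strings . "
          "brian may plays the red special . the red special sounds unique .")

follows = defaultdict(lambda: defaultdict(int))
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1            # learn the word-to-word statistics

def next_word(word):
    candidates = follows[word]
    return max(candidates, key=candidates.get) if candidates else "."

generated = ["the"]
for _ in range(6):
    generated.append(next_word(generated[-1]))

print(" ".join(generated))   # e.g. "the guitar sounds great . the guitar"
```

LaMDA replaces the word counts with billions of learned weights and looks much further back than one word, but the task it is optimized for is the same: predict the next word.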

Google's response is along the same lines: "These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic: if you ask them what it is like to be an ice cream dinosaur, they can generate text about melting and roaring, and so on," the company says.

Researchers Emily Bender and Timnit Gebru have described these language generation systems as "stochastic parrots", which repeat at random what they have been fed.

So, as researchers Ariel Gersenweig and Ramon Sanguisa have put it, transformers like LaMDA understand what they write about as much as a parrot that sings the tango El día que me quieras understands its lyrics.
