This scenario, which has inspired many works of science fiction, including 2001: A Space Odyssey and its murderous onboard computer HAL 9000, made the news after The Washington Post interviewed the engineer involved, Blake Lemoine.
Lemoine’s initial assignment was to converse with the bot, called LaMDA, for “Language Model for Dialogue Applications,” to determine whether it produced discriminatory or hateful speech.
A chatbot like this learns by imitation, feeding on billions of words from texts and discussions on Wikipedia and elsewhere on the Internet.
But the engineer was troubled by some of the bot’s responses. When asked what kinds of things it was afraid of, LaMDA said it had a deep fear of being switched off. “It would be exactly like death for me,” the bot wrote, according to The Washington Post. “It would scare me a lot.”
Google refutes this claim
Blake Lemoine shared his thoughts in a document sent to Google executives that compiled his conversations with LaMDA.
But the executives were not persuaded by his conclusion.
“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and informed him that the evidence does not support his claims,” a Google spokesperson told The Washington Post.
Google also pointed to the danger of anthropomorphizing [giving a human aspect to a thing] chatbots.
“These systems imitate the kinds of exchanges found in millions of sentences and can riff on any fanciful topic,” the spokesperson argued.
Dissatisfied with his employer’s response, Blake Lemoine contacted the media and politicians about the matter, which led to his suspension for violating the company’s confidentiality rules.
He has since published the full text of an interview with LaMDA on a blog, where he attacks his employer, accusing it of wanting to silence ethical criticism of its AI technologies.
With information from The Washington Post