Google has suspended an engineer who claimed that artificial intelligence is sentient

Google says Lemoine’s actions connected to his work on LaMDA violated its confidentiality policies.


According to the Washington Post, Google has placed one of its engineers on paid administrative leave for allegedly violating its confidentiality policies after he became concerned that an AI chatbot system had attained sentience. The engineer, Blake Lemoine, worked on the company’s Responsible AI team, studying whether Google’s LaMDA model produced discriminatory or hate speech.

The engineer’s concerns were apparently sparked by the AI system’s compelling responses to questions about its rights and robotics ethics. In April, he shared with executives a document titled “Is LaMDA Sentient?” containing a transcript of his conversations with the AI, in which he claims the model argues “that it is sentient because it has feelings, emotions, and subjective experience.” (After being placed on leave, Lemoine published the transcript via his Medium account.)

According to The Washington Post and The Guardian, Google says Lemoine’s actions connected to his work on LaMDA violated its confidentiality policies. He reportedly hired a lawyer to represent the AI system and met with a member of the House Judiciary Committee to discuss alleged unethical practices at Google. In a post on June 6th, the day he was placed on administrative leave, the engineer said he had sought “a limited amount of outside input to help lead me in my investigations,” and that the list of people he had spoken with included US government employees.

Google launched LaMDA last year

Last year at Google I/O, the search giant unveiled LaMDA, which it said would make its conversational AI assistants more capable and allow for more natural conversations. The company already uses similar language model technology for Gmail’s Smart Compose feature and for search engine queries.

In a statement given to WaPo, a Google spokesperson said there is “no evidence” that LaMDA is sentient. “Blake’s concerns have been investigated by our team, which includes ethicists and technologists, in accordance with our AI Principles, and we have notified him that the evidence does not support his assertions. He was told there was no indication that LaMDA was sentient (and plenty of evidence that it wasn’t),” spokesperson Brian Gabriel said.

“Of course, some in the broader AI community are thinking about the long-term prospect of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which aren’t sentient,” Gabriel explained. “These systems can riff on any fanciful topic by mimicking the types of conversations present in millions of phrases.”

“We are not aware of anyone else making the broad assumptions, or anthropomorphizing LaMDA, the way Blake has,” Gabriel added.

According to a linguistics professor quoted by WaPo, equating impressively written responses with sentience is wrong. “We now have machines that can generate words without thinking, but we haven’t figured out how to stop assuming a mind behind them,” said Emily M. Bender of the University of Washington.
