Google Sidelines Engineer Who Claims Its AI Is Sentient

SAN FRANCISCO – Google recently placed an engineer on paid leave after dismissing his claim that its artificial intelligence is sentient, surfacing yet another fracas over the company’s most advanced technology.

Blake Lemoine, a senior software engineer in Google’s Responsible AI organization, said in an interview that he was put on leave Monday. The company’s human resources department said he had violated Google’s confidentiality policy. The day before his suspension, Mr. Lemoine said, he handed over documents to a US senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

Google said its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness. “Our team – including ethicists and technologists – has reviewed Blake’s concerns regarding our AI principles and has informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said in a statement. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” The Washington Post first reported Mr. Lemoine’s suspension.

For months, Mr. Lemoine had tussled with Google managers, executives and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had a consciousness and a soul. Google says hundreds of its researchers and engineers have interacted with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. Most AI experts believe the industry is a very long way from computing sentience.

Some AI researchers have long made optimistic claims about these technologies soon reaching sentience, but many others are quick to dismiss such claims. “If you used these systems, you would never say such things,” said Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, who is exploring similar technologies.

While chasing the AI vanguard, Google’s research organization has spent the last few years mired in scandal and controversy. The division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena. In March, Google fired a researcher who had sought to publicly disagree with two of his colleagues’ published work. And the dismissals of two AI ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google’s language models, have continued to cast a shadow on the group.

Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an AI researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department had discriminated against.

“They have repeatedly questioned my sanity,” Mr. Lemoine said. “They said, ‘Have you been checked out by a psychiatrist recently?'” In the months before he was placed on administrative leave, the company had suggested he take a mental health leave.

Yann LeCun, head of AI research at Meta and a key figure in the rise of neural networks, said in an interview this week that these types of systems are not powerful enough to attain true intelligence.

Google’s technology is what scientists call a neural network, which is a mathematical system that learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
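To make that concrete – and this is only a rough sketch using synthetic stand-in data, not Google’s actual system – a small neural network can be trained in a few lines of Python to tell two kinds of “photos” apart purely from patterns in their pixels:

# A minimal sketch, assuming synthetic 8x8 grayscale images rather than real
# cat photos: a tiny feed-forward neural network learns to separate two
# classes of images from their pixel values alone.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

n = 500
not_cat = rng.normal(loc=0.3, scale=0.1, size=(n, 64))  # dimmer fake images
cat = rng.normal(loc=0.7, scale=0.1, size=(n, 64))       # brighter fake images
X = np.vstack([not_cat, cat])
y = np.array([0] * n + [1] * n)                           # 0 = not cat, 1 = cat

# The network adjusts its internal weights until it can predict the labels.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))

A real system learns in the same spirit, only from thousands of actual labeled photographs and with far larger networks.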

Over the past several years, Google and other leading companies have designed neural networks that have learned from enormous quantities of prose, including unpublished books and Wikipedia articles by the thousands. These “large language models” can be applied to many tasks. They can summarize articles, answer questions, generate tweets and even write blog posts.
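As a hedged illustration – using the open-source Hugging Face transformers library rather than Google’s own models, which are not publicly available – the summarization task looks roughly like this in Python:

# A minimal sketch, assuming the "transformers" package is installed; it
# downloads a pretrained summarization model and condenses a passage of text.
from transformers import pipeline

summarizer = pipeline("summarization")  # loads a default pretrained model

article = (
    "Google has placed an engineer on paid leave after he claimed that the "
    "company's conversational AI system is sentient. Google and most AI "
    "experts disagree, saying such models only mimic patterns in text."
)

result = summarizer(article, max_length=40, min_length=10)
print(result[0]["summary_text"])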

But they are extremely flawed. Sometimes they generate perfect prose. Sometimes they generate nonsense. The systems are very good at recreating patterns they have seen in the past, but they can’t reason like a human.
