Google suspends engineer who claims its AI is sentient


Google has placed one of its engineers on paid administrative leave for allegedly violating its confidentiality policies after he became concerned that an AI chatbot system had achieved sentience, The Washington Post reports. The engineer, Blake Lemoine, works in Google’s Responsible AI organization and was testing whether its LaMDA model generates discriminatory language or hate speech.

The engineer’s concerns reportedly grew out of convincing responses he saw the AI system generate about its rights and the ethics of robotics. In April he shared a document with executives titled “Is LaMDA Sentient?” containing a transcript of his conversations with the AI (after being placed on leave, Lemoine published the transcript via his Medium account), which he says shows the system arguing “that it is sentient because it has feelings, emotions and subjective experience.”

Google says Lemoine’s actions relating to his work on LaMDA violated its confidentiality policies, The Washington Post and The Guardian report. He reportedly invited a lawyer to represent the AI system and spoke to a representative of the House Judiciary Committee about alleged unethical activities at Google. In a June 6 Medium post, published the day he was placed on administrative leave, the engineer said he sought “a minimal amount of outside consultation to help guide my investigations” and that the list of people he had held discussions with included US government employees.

The search giant publicly announced LaMDA at Google I/O last year, and it hopes the model will improve its conversational AI assistants and make for more natural conversations. The company already uses similar language model technology for Gmail’s Smart Compose feature and for search engine queries.

In a statement given to The Washington Post, a Google spokesperson said there is “no evidence” that LaMDA is sentient. “Our team, including ethicists and technologists, has reviewed Blake’s concerns under our AI Principles and informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient (and lots of evidence against it),” said spokesperson Brian Gabriel.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel said. “These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic.”

“Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making such wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Gabriel said.

A linguistics professor interviewed by The Washington Post agreed that it is a mistake to equate convincing written responses with sentience. “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a professor at the University of Washington.

Timnit Gebru, a prominent AI ethics researcher who was fired by Google in 2020 (though the search giant says she resigned), said the discussion of AI sentience risks “derailing” more important ethical conversations around the use of artificial intelligence. “Instead of discussing the harms of these companies, the sexism, racism, AI colonialism, centralization of power, white man’s burden (building the good “AGI” [artificial general intelligence] to save us while what they do is exploit), spent the whole weekend discussing sentience,” she tweeted. “Derailing mission accomplished.”

Despite his concerns, Lemoine said he intends to continue working on AI in the future. “My intention is to stay in AI whether Google keeps me on or not,” he wrote in a tweet.

Update June 13, 6:30 am ET: Updated with additional statement from Google.

Source: www.theverge.com