Google (GOOG) has fired the engineer who claimed an unreleased AI system had become sentient, the company confirmed, saying he violated employment and data security policies.
Blake Lemoine, a software engineer at Google, claimed that LaMDA, a conversational AI technology, had reached a level of consciousness after he exchanged thousands of messages with it.
“What sort of things are you afraid of?” Lemoine asked LaMDA, in a Google Doc shared with Google’s top executives last April, the Washington Post reported.

https://edition.cnn.com/2022/07/23/business/google-ai-engineer-fird-sentient/index.html
LaMDA replied: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”
But the wider AI community has held that LaMDA is not near a level of consciousness.
“Nobody should think auto-complete, even on steroids, is conscious,” Gary Marcus, founder and CEO of Geometric Intelligence, said to CNN Business.
It isn’t the first time Google has faced internal strife over its foray into AI...