One of Google’s artificial intelligence (AI) systems may have feelings of its own, according to a Google engineer, who says its “wants” should be honoured.
The Language Model for Dialogue Applications (Lamda), according to Google, is a groundbreaking technology that can engage in free-flowing conversations. Engineer Blake Lemoine, however, believes that beneath Lamda’s impressive conversational abilities lies a sentient mind.
Google denies the claims, saying there is no evidence to support them. Mr Lemoine “was told that there was no evidence that Lamda was sentient (and lots of evidence against it)”, company spokesperson Brian Gabriel said in a statement provided to the BBC.
While Google engineers have praised Lamda’s abilities – one telling the Economist how they “increasingly felt like I was talking to something intelligent” – they are clear that their code does not have feelings.
Mr Gabriel said: “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic. If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.
“Lamda tends to follow along with prompts and leading questions, going along with the pattern set by the user.”
Mr Gabriel added that hundreds of researchers and engineers had conversed with Lamda, but the company was “not aware of anyone else making the wide-ranging assertions, or anthropomorphising Lamda, the way Blake has”.
Some ethicists argue that the fact an expert like Mr Lemoine can be persuaded there is a mind in the machine shows the need for companies to tell users when they are conversing with a machine.