Recently, when software engineer Blake Lemoine interviewed Google’s chatbot LaMDA, its responses were so lifelike that they convinced many that it was sentient.
Large Language Model (LLM) software, built on artificial intelligence, can take in a few sentences and come up with convincing replies.
Last year, the San Francisco Chronicle carried a story (recalled in a recent MIT Technology Review piece) about a man who uploaded old texts and Facebook messages from his deceased fiancée and created a chatbot version of her. It reportedly gave him a lot of peace.
Artificial intelligence has also made it possible to mimic voices, a technique called ‘voice cloning’. In June, Amazon shared an audio clip of a little boy listening to a passage from The Wizard of Oz, read in the voice of his recently deceased grandmother.
“Her voice was artificially re-created using a clip of her speaking that lasted for less than a minute,” says MIT Technology Review.
From a dead grandmother who reads aloud to one who talks back is but a small leap. So, in a sense, you can ‘talk to the dead’.
Chatbots aren’t sentient, but they can pretend to be if fed enough data, and in today’s world data is hardly in short supply. Only, things could swing to the other extreme: two chatbots locked in an unending conversation with each other.