Amazon has unveiled a new capability that lets its AI assistant Alexa replicate the voices of customers’ deceased relatives.
At its annual re:MARS conference, the company demonstrated the feature with a video in which a young boy asks Alexa to read him a bedtime story in the voice of his deceased grandmother.
Rohit Prasad, Amazon’s head scientist for Alexa AI, introduced the clip by saying that giving AI systems “human attributes” was becoming increasingly important “in these times of the ongoing pandemic, when so many of us have lost someone we love.” In the demo, Prasad noted, it is not Alexa’s standard voice reading the book but the child’s grandmother’s.
“While AI can’t eliminate that pain of loss, it can definitely make their memories last,” said Prasad.
Alexa then reads a passage from the book in a tone and voice that convincingly mimic those of the child’s grandmother.
According to Prasad, Alexa learned to imitate the voice from “less than a minute” of recorded audio of the child’s grandmother, with no need for long hours in a recording studio.
Amazon says its systems can learn to duplicate someone’s voice from just one minute of recorded audio, though it has not said whether the feature will ever be released to the public. If it were, in an age of readily available videos and voice notes, anyone could easily clone the voice of a loved one, or of anyone else they chose.
Users on social media have already criticized the feature, calling it “creepy” and a “monstrosity.” Yet such AI voice impersonation, often referred to as “audio deepfakes,” has grown increasingly common in recent years and is frequently employed in fields including podcasting, film and television, and video games.
The security community is concerned about the potential for impersonation scams built on voice-cloning technology. Critics have also compared Amazon’s presentation to a popular episode of the science fiction series Black Mirror, in which a grieving wife recreates her deceased partner first as a virtual assistant and then as a robot; ironically, the scenario only deepens her anguish.
IN PODCASTING AND FILM, AUDIO DEEPFAKES ARE COMMON
Many audio recording programs, for instance, let users clone specific voices from their recordings. If a podcast host flubs a line, a sound engineer can fix it simply by typing in a new script. Longer stretches of continuous speech still have to be re-recorded by hand, but minor changes take only a few clicks.
Film has also made use of the technology. In 2021, it emerged that a documentary about the life of chef Anthony Bourdain, who died in 2018, had used artificial intelligence to clone his voice and read passages from emails he sent. Many fans were appalled by the application, calling it “ghoulish” and “deceptive,” while others defended it as akin to other reconstructions used in documentaries.
Amazon’s Prasad suggested the feature could let users build “lasting personal relationships” with the departed, and many people around the world are indeed already using AI for exactly that purpose.
People have already built chatbots that mimic deceased family members, for instance, using AI trained on previously recorded conversations. With today’s AI technology, adding realistic voices to these systems is entirely feasible, and video avatars are expected to become more common as well.
Whether people will actually choose to turn their deceased loved ones into digital AI puppets, however, is another question entirely.