“Creepy.” “Morbid.” “Monstrosity.”
Those were just some of the reactions that poured in over social media when Amazon.com Inc’s Alexa digital assistant impersonated a grandmother reading an excerpt from The Wonderful Wizard Of Oz.
It all began innocently enough, with Alexa chief scientist Rohit Prasad attempting to demonstrate the digital assistant’s humanlike mien during a company presentation Wednesday. Prasad said he’d been surprised by the companionable relationship users develop with Alexa and wanted to explore that. Human traits like “empathy and affect” are key for building trust with people, he added.
“These attributes have become more important in these times of the ongoing pandemic, when so many of us have lost someone we love,” he said. “While AI can’t eliminate that pain of loss, it can definitely make their memories last.”
The presentation left the impression that Amazon was pitching the service as a tool for digitally raising the dead. Prasad walked that back a bit in a subsequent interview on the sidelines of Amazon’s re:MARS technology conference in Las Vegas, saying the service wasn’t primarily designed to simulate the voices of dead people.
“It’s not about people who aren’t with you anymore,” he said. “But it’s about, your grandma, if you want your kid to listen to grandma’s voice you can do that, if she is not available. Personally I would want that.”
As the presentation ricocheted around the Web, the creep factor dominated the discourse. But more serious concerns emerged, as well. One was the potential for deploying the technology to create deepfakes – in this case, using a legitimate recording to make people appear to say things they never actually vocalised.
Siwei Lyu, a University of Buffalo professor of computer science and engineering whose research includes deepfakes and digital media forensics, said he was concerned about the development.
“There are certainly benefits of voice conversion technologies to be developed by Amazon, but we should be aware of the potential misuses,” he said. “For instance, a predator can masquerade as a family member or a friend in a phone call to lure unaware victims, and a falsified audio recording of a high-level executive commenting on her company’s financial situation could send the stock market awry.”
While Amazon didn’t say when the new Alexa feature would be rolled out, similar technology could eventually make such mischief a lot easier. Prasad said Amazon had learned to simulate a voice based on less than a minute of a person’s speech. Pulling that off previously required hours in a studio. – Bloomberg