It can’t get creepier and spookier than this, as Amazon on Wednesday demonstrated a new feature that enables its virtual assistant, Alexa, to mimic the voices of users’ dead relatives. Yes, you read that right!

The company demoed the feature at its re:MARS (Machine Learning, Automation, Robots, and Space) conference in Las Vegas. In the short video clip, a boy asks Alexa to read “The Wizard of Oz” in the voice of his dead grandmother.

Alexa acknowledges the child’s request in her default, robotic voice, then immediately shifts to a softer, more humanlike tone, apparently mimicking the voice of the child’s dead grandmother, and narrates an excerpt from the children’s novel.

“As you saw in this experience, instead of Alexa’s voice reading the book, it’s the kid’s grandma’s voice,” said Rohit Prasad, Amazon’s Senior Vice President and Head Scientist for Alexa AI.

Prasad introduced the clip by saying that adding “human attributes” to AI systems was increasingly important “in these times of the ongoing pandemic when so many of us have lost someone we love.”

“While AI can’t eliminate that pain of loss, it can definitely make their memories last,” he added.


The Alexa team is teaching the digital assistant to mimic anyone’s voice from less than a minute of recorded audio.

The company is pitching the functionality as a way to help people preserve memories of loved ones, especially those lost to COVID-19.

“This required inventions where we had to learn to produce a high-quality voice with less than a minute of recording versus hours of recording in a studio,” Prasad said during the conference.

“The way we made it happen is by framing the problem as a voice conversion task and not a speech generation task. We are unquestionably living in the golden era of AI, where our dreams and science fiction are becoming a reality.”
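Amazon has not published the technical details of its method. As a rough, purely illustrative sketch of what “voice conversion” means in contrast to full speech generation, here is one of the simplest classical baselines: a per-dimension mean-variance transformation that maps spectral features of a source utterance onto a target speaker’s statistics. All function names and the random “features” below are hypothetical stand-ins, not Amazon’s implementation.

```python
# Illustrative sketch only -- not Amazon's method. Shows the classic
# mean-variance voice-conversion baseline: reshape an existing utterance's
# spectral features to match a target speaker's statistics, rather than
# generating speech from text (which needs far more target audio).
import numpy as np

def speaker_stats(feats: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Mean and std of each feature dimension (e.g. mel-cepstral coeffs)."""
    return feats.mean(axis=0), feats.std(axis=0)

def convert_voice(source_feats, source_stats, target_stats):
    """Map source-speaker frames onto the target speaker's distribution."""
    src_mean, src_std = source_stats
    tgt_mean, tgt_std = target_stats
    normalized = (source_feats - src_mean) / src_std  # zero-mean, unit-var
    return normalized * tgt_std + tgt_mean            # target mean/var

# Toy demo with random arrays standing in for real spectral frames.
rng = np.random.default_rng(0)
source = rng.normal(2.0, 1.5, size=(200, 13))   # source utterance frames
target = rng.normal(-1.0, 0.5, size=(200, 13))  # brief target-voice sample
converted = convert_voice(source, speaker_stats(source), speaker_stats(target))
# converted now matches the target speaker's per-dimension mean and variance.
```

The point of the framing Prasad describes is visible even in this toy: conversion only needs enough target audio to estimate simple statistics, whereas training a generative text-to-speech voice traditionally required hours of studio recordings.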

The feature is still in development, and Amazon has not said when it plans to roll it out to the public. After the voice-imitation capability was announced, some took to Twitter to voice concern that it could be misused by scammers and cybercriminals, or used to mimic people’s voices without their consent.