AI ethicists warn of ‘digital hauntings’ of deceased loved ones

The Internet is full of personal artifacts, many of which can linger online long after someone has passed away. But what if those relics are used to recreate deceased loved ones? It’s already happening, and AI ethicists warn that this reality opens us up to a new kind of “digital haunting” by “deadbots.”

People have been trying to talk to deceased loved ones for millennia through religious rituals, spiritual mediums, and even pseudoscientific technological approaches. But the rise of generative artificial intelligence offers a whole new possibility for grieving friends and family: the chance to converse with chatbot avatars trained on a deceased person’s online presence and data, including their voice and visual likeness. While not always explicitly advertised as such, some of the products offered by companies like Replika, HereAfter, and Persona can be used (and in some cases already are) to simulate the dead.

And while it may be difficult for some to process or even take this new reality seriously, it’s important to remember that the “digital afterlife” industry isn’t just a niche market limited to smaller startups. Just last year, Amazon showed off the potential for its Alexa assistant to mimic the voice of a deceased loved one using just a short audio clip.

[Related: Watch a tech billionaire talk to his AI-generated clone.]

AI ethicists and science fiction authors have been researching and anticipating these potential situations for decades. But for researchers at Cambridge University’s Leverhulme Centre for the Future of Intelligence, this unregulated, uncharted “ethical minefield” is already here. And to make this point, they envisioned three fictional scenarios that could easily play out any day now.

In a new study published in Philosophy & Technology, AI ethicists Tomasz Hollanek and Katarzyna Nowaczyk-Basińska relied on a strategy called “design fiction.” First coined by science fiction author Bruce Sterling, design fiction refers to “a suspension of disbelief about change achieved through the use of diegetic prototypes.” In short, the researchers pair plausible narratives with invented visual aids.

For their research, Hollanek and Nowaczyk-Basińska imagined three hyper-real scenarios of fictional individuals who encounter problems with various “post-mortem presence” companies, then created digital props such as fake websites and phone screenshots. The researchers focused on three demographic groups: data donors, data recipients, and service interactants. “Data donors” are the people an AI program is based on, while “data recipients” are defined as the companies or entities that may possess the digital information. “Service interactants,” meanwhile, are the family members, friends, and anyone else who uses a “deadbot” or “ghostbot.”

A fake Facebook ad for a fictional “ghostbot” company. Credit: Tomasz Hollanek

In one piece of design fiction, an adult user is impressed by the realism of their deceased grandparent’s chatbot, but soon starts receiving ads for a “premium trial” and food delivery services delivered in the style of their relative’s voice. In another, a terminally ill mother creates a deadbot for her eight-year-old son to help him grieve. But by adapting to the child’s responses, the AI begins to suggest an in-person meeting, causing psychological harm.

In a final scenario, an elderly customer signs up for a twenty-year AI program subscription in hopes of comforting their family. However, due to the company’s terms of service, their children and grandchildren cannot suspend the service even if they do not want to use it.

“The rapid advances in generative AI mean that almost anyone with internet access and some basic knowledge can revive a deceased loved one,” says Nowaczyk-Basińska. “At the same time, someone can leave an AI simulation as a parting gift for loved ones who are unwilling to process their grief in this way. The rights of both data donors and those interacting with afterlife AI services must be equally safeguarded.”

[Related: A deepfake ‘Joe Biden’ robocall told voters to stay home for primary election.]

“These services risk causing people enormous distress if they are exposed to unwanted digital hauntings by alarmingly accurate AI recreations of those they have lost,” Hollanek added. “The potential psychological impact, especially at an already difficult time, could be devastating.”

The ethicists believe that certain safeguards can and should be implemented as quickly as possible to prevent such outcomes. Companies should develop sensitive procedures for “retiring” an avatar, and remain transparent about how their services operate through risk disclaimers. Meanwhile, “recreation services” should be limited to adult users only, while also respecting the mutual consent of both data donors and data recipients.

“We now need to start thinking about how to limit the social and psychological risks of digital immortality,” Nowaczyk-Basińska argues.
