On November 2, 2023, the Beatles released a new song featuring the voice of John Lennon, who died in 1980. AI separated Lennon’s vocals from a decades-old demo recording and cleaned them to studio quality. The song, “Now and Then,” reached number one in multiple countries. In one track, the entire spectrum of posthumous AI — technology, law, commerce, ethics, and culture — became visible.
The AI digital twin conversation is not only about the living. As the technology for reproducing human identity becomes more capable, the question of what happens to a person’s digital identity after death becomes commercially and ethically urgent. Estates, families, technology companies, and society are grappling with questions that have no historical precedent.
The Technology
Creating a posthumous AI twin requires assembling a digital representation from data the person left behind. The quality of the output depends entirely on the quality and quantity of available data.
Visual reproduction. AI systems trained on photos and video footage can generate realistic AI avatars of deceased individuals. D-ID, HeyGen, and similar platforms can create talking head videos from a single photograph, though the quality improves significantly with more source material.
Voice reproduction. ElevenLabs, Respeecher, and Resemble AI can clone voices from audio recordings. Respeecher has particular expertise in this area, having recreated voices for Hollywood productions including posthumous performances. The minimum viable input is a few minutes of clear audio; higher-fidelity reproduction requires more extensive recordings.
Conversational reproduction. The most ambitious and controversial application. Large language models fine-tuned on a person’s written communications (emails, social media posts, books, interviews) can generate text responses in the person’s style. Combined with voice cloning and avatar technology, this creates an interactive AI twin that responds to conversation as the deceased person might have.
The quality spectrum ranges from simple photo-to-video animations (technically achievable with minimal data) to fully conversational digital humans (requiring extensive data and significant development). Most commercial deployments fall in the middle — AI-generated video presentations using the deceased person’s likeness and cloned voice.
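The three layers described above — style-matched text, cloned voice, animated likeness — compose into a single interactive loop. The sketch below shows that composition only; every function here is an illustrative stub standing in for a vendor model (none of these are real APIs from D-ID, HeyGen, ElevenLabs, or similar platforms):

```python
# Illustrative sketch of a conversational posthumous-twin pipeline:
# text generation -> voice synthesis -> avatar animation.
# All model calls are placeholder stubs, not real vendor APIs.

def generate_text(style_model, prompt):
    # Stand-in for an LLM fine-tuned on the person's written communications.
    return f"[{style_model}] reply to: {prompt}"

def synthesize_speech(voice_model, text):
    # Stand-in for a voice-cloning TTS call (trained on audio recordings).
    return {"voice": voice_model, "script": text}

def animate_avatar(avatar_model, audio):
    # Stand-in for a photo-to-video talking-head renderer.
    return {"avatar": avatar_model, "audio": audio}

def twin_turn(style_model, voice_model, avatar_model, user_message):
    """One conversational turn: style-matched text, spoken in the cloned
    voice, rendered as a lip-synced avatar video."""
    text = generate_text(style_model, user_message)
    audio = synthesize_speech(voice_model, text)
    return animate_avatar(avatar_model, audio)

result = twin_turn("style-llm", "voice-clone", "avatar-model",
                   "Tell me about 1969.")
```

The layering matters for the quality spectrum: a deployment can stop after the first or second stage (text-only or audio-only), which is why most commercial products land between photo animation and full digital humans.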
The Legal Framework
Posthumous personality rights — the right to control commercial use of a deceased person’s identity — vary dramatically by jurisdiction.
United States. Approximately 30 states recognize post-mortem personality rights, with protection periods ranging from 10 years (some states) to 100 years (Indiana, Oklahoma) after death. California, a critical state for entertainment industry rights, provides 70 years of post-mortem protection. In states without specific post-mortem rights, estates may have limited ability to prevent AI twin creation.
European Union. GDPR protections for personal data end at death under the regulation itself, but EU member states can extend protections. France, for example, provides certain post-mortem data rights. Personality rights under national law vary significantly across EU countries.
United Kingdom. No general post-mortem personality rights, though passing off and defamation law may provide limited protection for estates against misleading AI twin deployments.
The legal landscape creates a patchwork where the same AI twin might be legal in one jurisdiction and illegal in another. Estates and technology companies must navigate this complexity when deploying posthumous AI twins commercially.
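The patchwork of durations cited above can be made concrete as a lookup. This is a minimal sketch covering only the jurisdictions named in this article — it is illustrative, omits most jurisdictions, and is not legal advice:

```python
# Post-mortem personality-right durations named in this article,
# in years after death. None = no general statutory post-mortem right.
# Illustrative only -- incomplete and not legal advice.
POST_MORTEM_TERMS = {
    "California": 70,
    "Indiana": 100,
    "Oklahoma": 100,
    "United Kingdom": None,
}

def rights_active(jurisdiction, years_since_death):
    """Rough check: does a statutory post-mortem right still apply?"""
    term = POST_MORTEM_TERMS.get(jurisdiction)
    if term is None:
        # No general post-mortem right, or jurisdiction not in the table.
        return False
    return years_since_death < term
```

Even this toy version shows the core compliance problem: the same AI twin deployed 43 years after a death is inside California's 70-year window but entirely outside any statutory protection in the United Kingdom.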
The Ethical Terrain
The ethical debate around posthumous AI twins centers on five tensions.
Consent
The fundamental ethical challenge: the deceased person cannot consent to their AI twin’s creation or deployment. Even if the person expressed views about digital legacy during their lifetime, they could not have anticipated the specific capabilities and uses of current AI technology. Proxy consent from estates is a practical necessity but an imperfect substitute for direct consent.
Accuracy and Representation
An AI twin generates content the deceased person never actually said or did. If the AI produces statements, endorsements, or performances that the person would have opposed, the AI twin misrepresents the person in a way they cannot correct. The risk of putting words in the mouths of the dead is not theoretical — it is inherent in the technology.
Emotional Exploitation
Grief technology companies offer families the ability to interact with AI representations of deceased loved ones. While some families find comfort in this, mental health professionals express concern about potential interference with healthy grief processing. The commercial model — charging grieving families for AI interaction with their deceased relatives — raises ethical questions about exploitation.
Dignity
Cultural, religious, and personal views on death vary enormously. Some traditions view the body and likeness of the deceased as sacred. Some individuals would find posthumous digital representation degrading or offensive. Respecting these diverse perspectives requires sensitivity that commercial deployment models may not prioritize.
Historical Accuracy
AI twins of historical figures created for educational content raise questions about accuracy. An AI Albert Einstein explaining physics may inspire students — but if the AI generates historically inaccurate statements or modern opinions the real Einstein never held, it creates confusion between historical record and AI fiction.
Commercial Models
Despite the ethical complexity, a market for posthumous AI twins exists and is growing.
Celebrity estate licensing. Estates of deceased entertainers, athletes, and public figures license posthumous AI twin rights for new performances, endorsements, and content. The estate of a major celebrity can generate ongoing revenue from AI twin appearances in concerts, advertisements, and media. Respeecher, for example, has recreated the voices of deceased performers for major Hollywood productions.
Memorial services. Technology companies offer families AI-powered memorial products — interactive AI twins for personal remembrance, AI-generated messages for special occasions, and digital memory preservation services.
Educational and cultural content. AI twins of historical figures serve as presenters in educational content, museum exhibits, and cultural preservation projects. When done carefully — with clear disclosure that the content is AI-generated and with attention to historical accuracy — this application has genuine educational value.
A Responsible Framework
Organizations creating posthumous AI twins should adopt a responsibility framework that addresses the ethical concerns.
Respect documented wishes. If the deceased expressed preferences about digital legacy, follow them. If they did not, err toward conservative use with estate consent.
Disclose clearly. All posthumous AI twin content should be clearly labeled as AI-generated. Audiences must understand they are interacting with an AI representation, not the actual person.
Maintain accuracy. AI twins should not generate statements, endorsements, or opinions that the deceased person did not hold or would likely have opposed. When speculative content is necessary (educational scenarios), it should be clearly marked as such.
Protect family interests. Families of the deceased should have approval rights over commercial posthumous AI twin deployments, regardless of estate ownership structures.
Time-limit commercial use. Perpetual commercial exploitation of a deceased person’s AI likeness is ethically problematic. Time-limited deployments with sunset provisions respect the transition from commercial asset to historical memory.
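The five commitments above amount to a pre-deployment checklist, which can be sketched as a simple review record. The field names here are hypothetical, chosen to mirror the framework rather than any real governance tool:

```python
# Illustrative pre-deployment review record for a posthumous AI twin,
# mirroring the responsibility framework above. Field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeploymentReview:
    documented_wishes_followed: bool   # respect documented wishes
    estate_consent: bool
    ai_disclosure_label: bool          # content clearly labeled AI-generated
    accuracy_reviewed: bool            # no statements the person would oppose
    family_approval: bool              # protect family interests
    sunset_year: Optional[int] = None  # time-limit commercial use

    def approved(self) -> bool:
        """Deployment proceeds only if every checkpoint passes,
        including a concrete sunset date for commercial use."""
        return all([
            self.documented_wishes_followed,
            self.estate_consent,
            self.ai_disclosure_label,
            self.accuracy_reviewed,
            self.family_approval,
            self.sunset_year is not None,
        ])
```

Treating the sunset year as mandatory, not optional, encodes the framework's last point: a deployment with no end date fails review by construction.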
For the legal framework underlying posthumous AI rights, see our personality rights analysis and legal framework coverage.