When Khaby Lame authorized the creation of an AI digital twin capable of generating content in his likeness across multiple languages and time zones, the transaction raised a question that legal systems around the world are only beginning to confront: in an era when artificial intelligence can faithfully replicate a person’s face, voice, and behavioral patterns, who actually owns a human identity?
The question is not philosophical. It is urgently practical. Every creator with a recognizable online presence is a potential target for unauthorized AI replication. Every brand deploying AI-generated content featuring real people is operating in a legal grey zone. And every platform hosting AI-generated content that mimics real individuals is navigating liability questions with no established precedent.
The Current Legal Landscape
The legal frameworks that currently govern the use of personal identity were designed for a fundamentally different technological context. In the United States, personality rights are protected through a patchwork of state-level “right of publicity” statutes and common law doctrines. Approximately 35 states recognize some form of right of publicity, but the scope, duration, and remedies vary significantly. There is no federal right of publicity statute.
In the European Union, personality rights are generally protected through privacy law, data protection regulations, and unfair competition principles rather than through a unified personality right. The United Kingdom relies on passing off, privacy torts, and data protection law — a piecemeal approach that provides indirect protection but no comprehensive framework for identity rights.
These frameworks share a common limitation: they were designed to address the unauthorized use of a person’s existing image or likeness in specific commercial contexts like advertising. They were not designed to address the training and deployment of generative AI systems that can create entirely new performances, conversations, and commercial interactions using a person’s identity as the generative input.
State-by-State: The U.S. Right of Publicity Landscape
The United States presents the most complex jurisdictional picture for personality rights, because the right of publicity is a matter of state law with no federal statute providing a unified framework. The differences between states are not merely technical — they determine whether a creator has any actionable rights at all.
California offers some of the broadest protections. California Civil Code Section 3344 prohibits the unauthorized use of a person’s name, voice, signature, photograph, or likeness for commercial purposes. The statute allows recovery of actual damages or a statutory minimum of $750, whichever is greater, plus any profits attributable to the unauthorized use. Critically, California also recognizes a common-law right of publicity that extends beyond the statutory provisions. For creators based in California — and a disproportionate number of prominent U.S.-based creators are — this provides a relatively strong baseline of protection.
New York takes a notably different approach. New York’s Civil Rights Law Sections 50-51 provide protection against unauthorized commercial use of a person’s name, portrait, or picture, but have historically been interpreted more narrowly than California’s provisions. New York courts have generally required a direct, commercial use rather than the broader “connection with” standard applied in some other states. However, New York has been considering legislation to modernize these protections for the AI era.
Tennessee enacted the ELVIS Act (Ensuring Likeness Voice and Image Security Act) in 2024, becoming one of the first states to explicitly address AI-generated replicas. The Act specifically protects against the unauthorized use of AI-generated voice replicas and extends protection to cover synthetic media more broadly. The law’s name is a nod to Tennessee’s heritage as home to Elvis Presley — whose estate has long been at the center of post-mortem publicity rights litigation.
Illinois stands out for the Biometric Information Privacy Act (BIPA), which, while not technically a right of publicity statute, provides some of the most consequential protections for creator identity data. BIPA requires informed consent before biometric identifiers — including facial geometry and voiceprints — can be collected or stored. The Act’s private right of action has produced settlements exceeding $500 million in cases involving companies like Facebook (now Meta) and Google.
Texas, Florida, and Georgia have varying statutory and common-law protections, with Florida’s statute being notably broad in scope but limited in remedies. Texas recognizes both statutory and common-law rights of publicity, with protection that survives death — an important consideration for estate planning in the AI twin context.
At least 15 states currently have no statutory right of publicity at all, leaving creators in those jurisdictions to rely on common-law doctrines, unfair competition principles, or privacy torts — frameworks that may not extend to AI-generated synthetic content.
For creators operating nationally or globally, this patchwork creates significant enforcement challenges. An unauthorized AI digital twin that generates content visible across all 50 states simultaneously may be actionable in some jurisdictions and not others.
EU vs. U.S.: Two Regulatory Philosophies
The divergence between European and American approaches to personality rights reflects fundamentally different regulatory philosophies, and creators operating internationally must navigate both.
The U.S. approach is primarily property-based. The right of publicity is treated as an economic right — the ability to control and profit from the commercial use of one’s identity. The emphasis is on preventing unauthorized commercial exploitation. Non-commercial uses (parody, news reporting, artistic expression) are generally protected by First Amendment considerations. The result is a system that gives creators strong economic tools but limited control over non-commercial uses of their identity.
The European approach is primarily dignity-based. Continental European legal traditions treat personality rights as aspects of human dignity and personal autonomy, not merely economic assets. The GDPR reinforces this by treating biometric data as a special category of personal data subject to heightened protections. Under GDPR, the processing of biometric data for the purpose of uniquely identifying a natural person is generally prohibited unless one of a limited set of legal bases applies. Explicit consent is the most common basis, and that consent must be freely given, specific, informed, and unambiguous.
The practical implications for AI digital twin deployment are significant. In the United States, a creator’s primary concern is unauthorized commercial exploitation — someone using their face to sell products without permission. In the EU, the concern extends to any processing of biometric data without proper legal basis, regardless of commercial intent. An AI system that trains on a creator’s publicly available content to create a digital replica may violate GDPR even if the resulting twin is never used commercially, because the training process itself involves biometric data processing.
The UK, post-Brexit, occupies a middle position. It maintains GDPR-equivalent data protection through the UK GDPR but lacks the Continental tradition of comprehensive personality rights. Protection comes through a combination of passing off (requiring goodwill, misrepresentation, and damage), privacy torts (particularly misuse of private information), and data protection law.
For global creators, the optimal strategy is to structure identity rights protection around the most demanding jurisdiction, typically the EU; meeting that baseline generally satisfies requirements elsewhere as well.
Landmark Cases Shaping AI Identity Law
Several legal proceedings are establishing the precedents that will govern AI-generated identity content for years to come.
The estate of a prominent entertainer recently pursued litigation against a technology company that used archival recordings to create an AI voice clone without authorization. The case raised the question of whether training an AI model on publicly available content constitutes the kind of “use” that right of publicity statutes contemplate. If the model is trained on public data but deployed to generate new commercial content, at what point does the unauthorized use occur — during training or during deployment?
Class action litigation under Illinois’ BIPA has produced landmark settlements that directly impact the AI identity space. These cases established that even passive biometric data collection — such as facial recognition tagging in photo databases — can constitute a BIPA violation if done without proper informed consent. For AI twin platforms that process creator biometric data, the BIPA precedent means that consent must be obtained before any biometric processing occurs, not after the model has been trained.
The SAG-AFTRA agreements reached following the 2023 Hollywood strikes included specific provisions governing the use of AI-generated replicas of performers. These provisions — requiring informed consent, separate compensation for AI use, and restrictions on the scope and duration of AI deployment rights — represent the most detailed negotiated framework for AI identity use to date. While they apply specifically to union performers, they provide a template that creators in other sectors are beginning to adopt.
The Generative Gap
The critical distinction between traditional identity misuse and AI-powered identity deployment is the shift from reproduction to generation. Traditional personality rights cases involve someone using an existing photograph, video clip, or voice recording without authorization. The unauthorized content is a copy of something that already exists.
AI digital twins are different. They generate new content that never existed before. When an AI system trained on a creator’s biometric data produces a livestream in which the creator’s digital replica promotes products and interacts with viewers in real time, the resulting content is not a reproduction. It is a synthetic creation — one that looks, sounds, and behaves like the creator, but was never performed by them.
This distinction creates significant legal uncertainty. If the content is not a reproduction, does it infringe reproduction-based rights? If the AI system generates the content autonomously, who is the author? If the digital twin says something defamatory or promotes a harmful product, who bears liability — the creator who licensed their identity, the company that deployed the twin, or the AI system itself?
Deepfake Proliferation and Creator Vulnerability
The urgency of these questions is amplified by the rapid proliferation of deepfake technology. The tools required to create convincing AI-generated replicas of real people are increasingly accessible and affordable. What once required sophisticated research labs can now be accomplished with consumer-grade software and modest computing resources.
For creators, this means that their identity can be — and increasingly is — replicated without their consent, compensation, or control. Unauthorized deepfakes of prominent creators are already widespread, used for everything from fraudulent endorsements to non-consensual content. The harms range from financial (fraudulent commercial use) to reputational (association with products or messages the creator would never endorse) to deeply personal.
The existing legal toolkit is inadequate to address this threat at scale. Filing individual takedown requests is a game of whack-a-mole. Pursuing litigation against anonymous deepfake creators is impractical. And platform-level content moderation systems cannot reliably detect and remove AI-generated identity misuse.
The EU AI Act and Emerging Regulation
The European Union’s AI Act represents the most comprehensive regulatory effort to date in addressing AI-generated content involving real people. The Act introduces transparency requirements for AI-generated content, including obligations to disclose when content is synthetically produced. It also establishes restrictions on certain categories of biometric processing and creates a risk-based framework for governing AI systems.
However, the AI Act is primarily focused on regulating AI system providers and deployers, not on establishing positive rights for individuals whose identities are used in AI training and deployment. It addresses the supply side of the equation — what AI companies must do — but provides limited tools for the demand side — what individuals can claim and control.
In the United States, legislative activity is concentrated at the state level. Several states have enacted or proposed deepfake-specific legislation, typically focusing on non-consensual intimate imagery and election-related disinformation. Tennessee’s ELVIS Act, discussed above, goes further by explicitly addressing AI-generated voice replication. Federal legislation addressing broader digital identity rights has been proposed but not enacted.
What Creators Need: Sovereign Infrastructure
The legal landscape reveals a clear pattern: statutory frameworks are evolving, but they are years behind the technology. For the foreseeable future, the primary protection for a creator’s digital identity will come not from legislation, but from infrastructure — the technical and contractual systems that allow creators to establish, enforce, and monetize their identity rights.
This infrastructure needs to accomplish several things. It needs to provide sovereign storage for biometric data, ensuring that the raw materials of a creator’s digital identity are encrypted, portable, and controlled by the creator rather than by platforms or corporate partners. It needs to offer standardized consent frameworks that clearly define what a creator has authorized and what they have not. It needs to include automated rights management that tracks how a creator’s identity is being used across platforms and jurisdictions and ensures compliance with the terms of any licensing arrangement.
And it needs to create a legal backbone — standardized templates for personality rights licensing, dispute resolution mechanisms, and compliance frameworks that work across the fragmented global regulatory landscape.
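There is no settled standard for what such a consent framework looks like, but a minimal sketch helps make the idea concrete. The Python below models a hypothetical identity-license record and a scope check that an automated rights-management layer could run before any deployment; every field name and category is an illustrative assumption, not an existing schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class IdentityLicense:
    """A single grant of AI digital-twin rights by a creator to a licensee."""
    licensee: str
    permitted_uses: frozenset[str]         # e.g. {"dubbed_product_video"}
    permitted_territories: frozenset[str]  # ISO country codes, e.g. {"DE", "FR"}
    prohibited_contexts: frozenset[str]    # e.g. {"political", "adult"}
    valid_from: date
    valid_until: date

def use_is_licensed(grant: IdentityLicense, use: str, territory: str,
                    context: str, on: date) -> bool:
    """Return True only if a proposed deployment falls inside the granted scope."""
    return (
        grant.valid_from <= on <= grant.valid_until
        and use in grant.permitted_uses
        and territory in grant.permitted_territories
        and context not in grant.prohibited_contexts
    )

# Example: a one-year, EU-only license for dubbed product videos.
brand_license = IdentityLicense(
    licensee="ExampleBrand GmbH",
    permitted_uses=frozenset({"dubbed_product_video"}),
    permitted_territories=frozenset({"DE", "FR", "ES"}),
    prohibited_contexts=frozenset({"political", "adult"}),
    valid_from=date(2025, 1, 1),
    valid_until=date(2025, 12, 31),
)

assert use_is_licensed(brand_license, "dubbed_product_video", "DE", "commerce", date(2025, 6, 1))
assert not use_is_licensed(brand_license, "dubbed_product_video", "US", "commerce", date(2025, 6, 1))
```

In practice a record like this would sit alongside the signed legal agreement rather than replace it; the point is that scope, territory, and duration become machine-checkable instead of buried in a PDF.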
Platform Compliance Comparison: Who Protects Creator Identity
Not all AI avatar platforms treat creator identity with the same level of care. A meaningful evaluation of platform options should include their compliance posture on personality rights.
Consent mechanisms. Leading platforms like Synthesia require documented consent from any individual whose likeness is used to create a custom avatar, including verification processes to confirm that the person depicted actually authorized the creation. Other platforms have less rigorous verification, creating the risk that unauthorized digital twins can be created with minimal friction.
Usage restrictions. Some platforms implement content moderation systems that prevent AI-generated avatars from appearing in contexts that could be harmful to the depicted individual — including political content, adult content, and misleading endorsements. Others provide minimal content restrictions, leaving the depicted individual exposed to reputational harm.
Data handling. Platforms that process biometric data for avatar creation have varying approaches to data retention, access, and deletion. GDPR-compliant platforms must provide data subjects with the right to access, correct, and delete their biometric data. Not all platforms have implemented these capabilities in a creator-friendly manner.
Rights management. The most advanced platforms are beginning to offer automated rights tracking — tools that monitor where and how an AI-generated avatar is being used across the internet. This capability is essential for creators who license their digital twin to commercial partners and need to verify compliance with the agreed terms.
For a detailed comparison of platform features and compliance postures, see our AI avatar platform category rankings.
Creator Protection Checklist: Practical Steps
Creators cannot wait for comprehensive legislation. The following actions provide practical protection within the current legal environment.
Register your trademarks. Register your name, stage name, and recognizable catchphrases as trademarks in your primary operating jurisdictions. Trademark registration provides a distinct legal tool for preventing unauthorized commercial use of your identity markers and creates a public record of your claimed rights.
Document your biometric assets. Create and maintain a comprehensive inventory of your biometric data — facial images, voice recordings, motion capture data — with timestamps and chain-of-custody documentation. This inventory serves as evidence of your identity assets if you need to pursue unauthorized use.
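A minimal sketch of what that inventory might look like, assuming the assets sit in a local folder: content hashes and UTC timestamps give each file a verifiable fingerprint that can later be matched against suspected unauthorized copies or training data. The file layout and column names are illustrative.

```python
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash, so later copies can be matched back to the original asset."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_inventory(asset_dir: Path, out_csv: Path) -> None:
    """Write a timestamped manifest of every biometric asset in a directory."""
    with out_csv.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file", "sha256", "bytes", "inventoried_at_utc"])
        for asset in sorted(asset_dir.rglob("*")):
            if asset.is_file():
                writer.writerow([
                    str(asset),
                    sha256_of(asset),
                    asset.stat().st_size,
                    datetime.now(timezone.utc).isoformat(),
                ])

# Example: inventory a folder of voice recordings and reference footage.
# build_inventory(Path("biometric_assets"), Path("asset_manifest.csv"))
```

A full chain-of-custody record would layer signatures or third-party timestamping on top of a manifest like this, but even the basic version establishes when you held which assets.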
Review platform terms of service. Audit every platform on which you have an active presence and understand what rights you have granted over your content and identity data. Pay particular attention to clauses granting the platform rights to use your content for AI training, content recommendation, or derivative works.
Establish a consent framework. Create a standardized consent template — reviewed by qualified legal counsel — that specifies the precise scope, duration, territory, and restrictions of any identity licensing arrangement. Do not rely on verbal agreements or informal understandings for something as consequential as your digital identity.
Engage with biometric sovereignty infrastructure. As platforms offering sovereign biometric data storage and management become available, early adoption provides both practical protection and a signal to potential commercial partners that you take identity governance seriously.
Monitor for unauthorized use. Implement regular monitoring for unauthorized AI-generated content using your likeness. Tools from Sensity AI and similar deepfake detection platforms can help identify unauthorized replicas. Document every instance for potential enforcement action.
Consult specialized legal counsel. The intersection of personality rights, AI law, and digital commerce is a rapidly evolving specialty. General entertainment lawyers may not have sufficient expertise. Seek counsel with specific experience in AI identity transactions and biometric data regulation.
The Case for Proactive Protection
Creators who wait for legislation to catch up to the technology will find themselves unprotected during the most critical period of the AI identity transition. The creators who take proactive steps — vaulting their biometric data, establishing clear ownership frameworks, and engaging with identity sovereignty infrastructure — will be positioned to capture the enormous economic opportunity that AI digital twins represent while maintaining control over their most valuable asset: themselves.
The Khaby Lame deal demonstrated what happens when the commercial potential of identity is realized before the legal infrastructure catches up. The $975 million valuation validated the economic thesis. The subsequent stock collapse illustrated the structural risks. The lesson for every creator is clear: personality rights protection is not a secondary consideration that follows commercial success. It is a prerequisite for sustainable identity commercialization.
The age of AI has made every person with a recognizable digital presence a potential economic asset. The question is whether that asset will be controlled by the person it belongs to, or by the platforms, corporations, and bad actors who are already racing to exploit it.
The answer will depend less on what legislators do and more on what infrastructure creators choose to adopt.