The legal framework governing AI digital identity is being constructed in real time. As the technology to create synthetic representations of human beings advances from experimental to commercial — exemplified by the $975 million Khaby Lame transaction — legislatures, courts, and regulatory bodies worldwide are racing to establish rules for an asset class that did not exist two years ago.

The result, as of March 2026, is a patchwork of overlapping and sometimes contradictory regulations. The EU has moved fastest with comprehensive legislation. The United States is proceeding state by state. Asia-Pacific jurisdictions are adopting targeted approaches. And significant gaps remain where creators, platforms, and enterprises operate without clear legal guidance.

This analysis surveys the global regulatory landscape for AI digital identity, identifying the key frameworks, their implications, and the areas where legal uncertainty creates both risk and opportunity.

The European Union: The AI Act Framework

The EU AI Act, which entered into force in August 2024 and applies in stages through August 2026, represents the most comprehensive regulatory framework for AI-generated human representations anywhere in the world.

Classification and Requirements

The Act classifies AI systems according to a risk-based framework. Systems that generate synthetic media of identifiable real persons are classified as high-risk in most commercial applications, triggering several mandatory requirements.

Transparency obligations require that all AI-generated content depicting real persons be clearly labeled as synthetic. This applies to video, audio, and images. The labeling must be both machine-readable (embedded metadata) and human-perceivable (visible disclosure). For AI digital twins deployed in commerce, customer service, or content creation, this means every piece of output must carry clear disclosure.
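
To make the dual requirement concrete, the sketch below labels a generated image on both layers at once: a visible disclosure drawn onto the frame and a machine-readable flag embedded in metadata. It is a minimal illustration assuming Pillow and a hypothetical metadata schema; the AI Act does not prescribe this format, and production systems would typically adopt a provenance standard such as C2PA.

```python
# Minimal dual-layer labeling sketch for a generated PNG (Pillow assumed).
# The metadata keys and disclosure text are illustrative, not a format
# prescribed by the AI Act.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")

    # Human-perceivable layer: a visible disclosure rendered onto the frame.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated content", fill="white")

    # Machine-readable layer: embedded metadata in PNG text chunks.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")          # hypothetical key
    meta.add_text("generator", "example-ai-twin")  # hypothetical key
    img.save(dst_path, "PNG", pnginfo=meta)

label_synthetic_image("twin_frame.png", "twin_frame_labeled.png")
```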

Consent documentation mandates that deployers of AI systems using a person’s likeness or voice must maintain verifiable consent records. For AI twin platforms like HeyGen and Synthesia, this has required implementing consent verification workflows and maintaining audit trails.
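
What a verifiable consent record might look like in practice: the sketch below hash-chains each consent event so that any later tampering with the audit trail is detectable. The fields and workflow are assumptions for illustration, not the actual schemas used by HeyGen or Synthesia.

```python
# Illustrative consent ledger with a tamper-evident SHA-256 hash chain.
# Field names and scopes are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

class ConsentLedger:
    """Append-only log of consent events; each entry hashes its predecessor."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, subject_id: str, scope: str, granted: bool) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "subject_id": subject_id,   # person whose likeness/voice is used
            "scope": scope,             # e.g. "voice-clone:commercial"
            "granted": granted,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the chain; editing any entry breaks every later hash."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = digest
        return True
```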

Technical safeguards require that high-risk AI systems include mechanisms to prevent misuse. For voice cloning platforms like ElevenLabs and Resemble AI, this has accelerated the adoption of watermarking, consent verification, and detection capabilities.
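
As a toy illustration of the watermarking concept, the sketch below hides a payload in the least significant bits of 16-bit PCM audio samples (NumPy assumed). LSB marks are fragile and trivially stripped; commercial voice platforms use robust, proprietary perceptual watermarks, so this shows only the principle of carrying provenance data inside the signal itself.

```python
# Toy LSB audio watermark: conceptual only, not a production technique.
import numpy as np

def embed(samples: np.ndarray, payload: bytes) -> np.ndarray:
    """Write payload bits into the LSB of the first len(payload)*8 samples."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > samples.size:
        raise ValueError("clip too short for payload")
    out = samples.copy()
    out[: bits.size] = (out[: bits.size] & ~1) | bits
    return out

def extract(samples: np.ndarray, n_bytes: int) -> bytes:
    """Read the LSBs back into bytes."""
    bits = (samples[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

pcm = np.zeros(16_000, dtype=np.int16)    # one second of silence at 16 kHz
token = b"consent:abc123"                 # hypothetical consent token
marked = embed(pcm, token)
assert extract(marked, len(token)) == token
```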

Enforcement and Penalties

Violations of the AI Act can result in fines of up to 35 million euros or 7% of global annual turnover, whichever is higher — a penalty structure that exceeds even the GDPR in potential magnitude. Each EU member state is establishing national supervisory authorities responsible for enforcement.

GDPR Interaction

The AI Act operates alongside the General Data Protection Regulation, creating a dual regulatory layer. Biometric data — which includes the facial geometry, voice patterns, and behavioral characteristics used to create AI twins — is classified as special category data under GDPR, subject to the strictest processing requirements. Explicit consent is required. Purpose limitation applies. Data subjects retain rights to erasure, which has implications for models trained on personal biometric data.

The interaction between the AI Act’s transparency requirements and GDPR’s data protection framework creates compliance complexity for platforms operating across the EU. Platforms must simultaneously disclose that content is AI-generated while protecting the underlying biometric data used to create it.

United States: A State-by-State Patchwork

The United States has no federal legislation specifically addressing AI digital identity. Instead, a patchwork of state laws, existing intellectual property frameworks, and proposed federal legislation creates an uneven regulatory landscape.

Tennessee: The ELVIS Act

Tennessee’s Ensuring Likeness Voice and Image Security (ELVIS) Act, enacted in 2024, is the most targeted state legislation addressing AI identity replication. The law specifically extends right-of-publicity protections to cover AI-generated replications of voice and likeness. It applies to both living and deceased individuals and creates a specific cause of action for unauthorized AI cloning.

The ELVIS Act is notable for its explicit recognition that AI replication constitutes a distinct category of identity exploitation — not merely an extension of traditional impersonation or unauthorized use of likeness. This framing has influenced subsequent legislative efforts in other states.

California: AB 2602 and AB 1836

California, home to both the entertainment industry and the technology industry, has addressed AI identity through multiple legislative channels. AB 2602 addresses the use of digital replicas in entertainment contracts, requiring that performers give informed consent to the creation of AI replicas and limiting the scope of blanket consent provisions. AB 1836 extends personality rights protections posthumously, addressing the use of deceased individuals’ likenesses in AI-generated content.

These laws are significant because California’s entertainment industry generates a disproportionate share of the commercial activity around AI human replications. SAG-AFTRA’s 2023 strike and resulting agreement on AI usage established contractual frameworks that California law now reinforces.

New York

New York has proposed legislation expanding its existing right of publicity to cover AI-generated replications. The state’s existing Sections 50 and 51 of the Civil Rights Law, which provide criminal and civil penalties for unauthorized commercial use of a person’s name, portrait, or picture, are being amended to explicitly include AI-generated synthetic media.

Federal Proposals

Several federal bills have been introduced, including the NO FAKES Act, which would create a federal right to control AI-generated replicas of one’s voice and likeness. The bill would provide both criminal penalties and a private right of action, and would apply to both living and deceased individuals with a 70-year posthumous protection period. As of March 2026, no federal legislation has been enacted, though industry observers consider passage of some form of federal AI identity legislation likely within the next 12-18 months.

Section 230 Implications

A critical unresolved question is how Section 230 of the Communications Decency Act — which shields platforms from liability for user-generated content — applies to AI-generated content. If a platform’s AI system generates deepfake content of a real person, is the platform shielded by Section 230? Courts have not yet provided definitive guidance, but the trend in recent cases suggests that content generated by AI systems may not qualify for Section 230 protection because it is not “information provided by another information content provider.”

United Kingdom

The UK’s approach to AI digital identity regulation operates through a combination of existing intellectual property law, data protection (UK GDPR), and the Online Safety Act.

The UK does not have a standalone right of publicity equivalent to US state laws. Instead, protections against unauthorized use of identity are addressed through the tort of passing off, privacy law (specifically misuse of private information), and data protection. The Online Safety Act, fully operational in 2026, addresses harmful deepfake content — particularly non-consensual intimate imagery — with criminal penalties.

The UK’s AI regulatory framework, articulated through the government’s AI Regulation White Paper, takes a sector-specific approach rather than the EU’s horizontal framework. This means that AI digital identity regulation is handled by existing sectoral regulators: the Financial Conduct Authority for financial services applications, Ofcom for communications and media, and the Information Commissioner’s Office for data protection.

This approach creates flexibility but also fragmentation. A creator deploying an AI twin across multiple sectors may need to navigate multiple regulatory frameworks simultaneously.

Asia-Pacific Approaches

China

China has implemented the most prescriptive AI identity regulations in Asia. The Deep Synthesis Provisions, effective January 2023, require explicit consent for creating synthetic media of any identifiable person, mandatory labeling of all AI-generated content, real-name verification for users of AI synthesis platforms, and content review mechanisms to prevent harmful deepfakes.

China’s regulations are particularly relevant to the AI digital twin economy because the country’s livestream commerce market — the largest in the world — is a primary deployment environment for AI presenters. The Three Sheep Group’s involvement in the Khaby Lame deal underscores the cross-border regulatory complexity when AI identity assets are deployed in Chinese commerce.

Singapore

Singapore has taken a principles-based approach through its Model AI Governance Framework. Rather than prescriptive legislation, the framework establishes guidelines for AI deployment that emphasize transparency, accountability, and human oversight. The Personal Data Protection Act (PDPA) governs the collection and use of biometric data, requiring consent and purpose limitation.

Singapore’s approach is attractive to AI twin platforms because it provides regulatory clarity without the compliance burden of the EU AI Act. Several AI companies have established Asia-Pacific headquarters in Singapore for this reason.

South Korea

South Korea enacted the Deepfake Regulation Act, which criminalizes the creation and distribution of non-consensual deepfakes and requires platforms to implement detection and removal mechanisms. The law is notable for its focus on enforcement — South Korea has actively prosecuted deepfake cases, unlike many jurisdictions where laws exist but enforcement is minimal.

UAE and Middle East

The UAE has positioned itself as an AI-friendly jurisdiction through initiatives like the AI Office and regulatory sandbox programs. Dubai’s DIFC (Dubai International Financial Centre) has established a framework for AI governance that includes provisions for digital identity and synthetic media. The approach emphasizes enabling innovation while requiring transparency and consent.

The UAE’s relevance is growing because the Rich Sparkle-Khaby Lame deal identified the Middle East as a primary target market for AI twin commerce. As more AI identity transactions target the region, the UAE’s regulatory framework will become increasingly significant.

Cross-Border Challenges

The most significant legal challenge facing the AI digital identity industry is not any single jurisdiction’s regulations but the interaction between them.

An AI twin created in the United States using data collected under California law, deployed on a platform headquartered in Europe under the AI Act, generating content viewed in China under the Deep Synthesis Provisions, and driving commerce in the UAE under DIFC regulations, must simultaneously comply with all applicable frameworks. There is no harmonization mechanism, no mutual recognition agreement, and no international standard for AI identity rights.

This creates several practical challenges. Consent fragmentation means consent obtained under one jurisdiction’s standards may be insufficient under another’s. Labeling inconsistency means the format, content, and placement of AI content disclosures vary across jurisdictions. Data sovereignty conflicts arise when biometric data collected in one jurisdiction is processed or stored in another with different protection standards.

For platforms building identity vault infrastructure, cross-border regulatory compliance is a core product requirement — not an afterthought. The platforms that build jurisdictional awareness into their architecture will have significant competitive advantages.
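
One way such jurisdictional awareness might look in code: a hand-maintained policy table keyed by jurisdiction, with a deployment required to satisfy the union of all applicable obligations. The jurisdiction codes and requirement fields below are illustrative simplifications of the frameworks discussed above, not a legal encoding.

```python
# Sketch of jurisdiction-aware compliance: policy values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    explicit_consent: bool          # consent must be explicit, not implied
    visible_label: bool             # human-perceivable disclosure required
    embedded_label: bool            # machine-readable disclosure required
    local_biometric_storage: bool   # biometric data must stay in-jurisdiction

POLICIES = {
    "EU": Policy(True, True, True, False),       # AI Act + GDPR
    "CN": Policy(True, True, True, True),        # Deep Synthesis Provisions
    "US-CA": Policy(True, False, False, False),  # AB 2602 / AB 1836
    "SG": Policy(True, False, False, False),     # PDPA + Model Framework
}

def combined_obligations(jurisdictions: list[str]) -> Policy:
    """Union of requirements: a deployment must satisfy the strictest rule
    applicable in any jurisdiction it touches."""
    selected = [POLICIES[j] for j in jurisdictions]
    return Policy(
        explicit_consent=any(p.explicit_consent for p in selected),
        visible_label=any(p.visible_label for p in selected),
        embedded_label=any(p.embedded_label for p in selected),
        local_biometric_storage=any(p.local_biometric_storage for p in selected),
    )

print(combined_obligations(["EU", "CN", "US-CA"]))
```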

Implications for Creators and Platforms

The current regulatory environment creates both obligations and opportunities.

For creators, the emerging legal framework strengthens your rights over your identity. Personality rights are expanding to cover AI replications. Consent requirements are becoming more stringent. Transparency obligations mean audiences will know when content is AI-generated. The practical implication: document everything. Record consent explicitly. Maintain control over your biometric data. Use platforms that align with the strongest regulatory standards, because compliance infrastructure built today becomes a competitive advantage as enforcement increases.

For platforms, the regulatory landscape is a forcing function for responsible AI development. Platforms that build consent verification, watermarking, transparency labeling, and audit capabilities into their core architecture are not just complying with current law — they are building defensible market positions. Platforms that treat compliance as an afterthought will face increasing legal liability, customer attrition, and potential market exclusion.

For investors, regulatory risk is real but manageable. The direction of regulation is clear: toward stronger identity protections, mandatory transparency, and consent requirements. Companies aligned with this direction have lower regulatory risk and stronger enterprise appeal. Companies that depend on regulatory ambiguity for their business model face existential risk.

The legal framework for AI digital identity will continue evolving rapidly. By 2028, most major economies will have specific legislation addressing AI identity replication. The companies and creators who build their operations on the strongest available protections today will be best positioned when universal standards emerge.


This analysis is for informational purposes and does not constitute legal advice. Regulatory frameworks described are based on publicly available legislation and may be subject to amendments, implementing regulations, and judicial interpretation.