Generative AI creates faces that do not belong to anyone. Or, more precisely, faces whose ownership is legally undefined. A GAN-generated portrait, a diffusion model’s synthesized executive, a real-time avatar assembled from thousands of training images — these synthetic identities exist in a legal vacuum that the current personality rights framework was never designed to address.
This is not an abstract philosophical question. It has immediate commercial implications. Companies are deploying AI-generated spokespeople for marketing campaigns. Creators are building audiences around virtual characters. Platforms are generating synthetic influencers for brand partnerships. And in every case, the fundamental question remains unanswered: who owns a machine-made face?
The Ownership Problem
Traditional personality rights law — the right of publicity — is anchored to natural persons. A real human being owns their likeness, voice, and identity. This right exists because the person exists. The framework collapses when the identity in question was generated by a machine and never belonged to any human.
Three scenarios illustrate the complexity.
Scenario 1: AI-generated face from real person’s data. A company creates an AI avatar trained on a specific employee’s face and voice. The employee leaves the company. Who owns the AI avatar? Under current law, the employee retains personality rights over their likeness. The company may have contractual rights depending on employment agreements. The AI platform provider may claim rights over the generated output. Multiple parties have competing claims.
Scenario 2: Entirely synthetic face. A company generates a completely synthetic spokesperson — a face that does not correspond to any real person. The company invests in building brand recognition around this synthetic character. Under current law, the synthetic character has no personality rights (it is not a person). The company may protect it through trademark and copyright, but these frameworks were not designed for identity protection.
Scenario 3: AI-generated face that resembles a real person. A generative model produces a face that, coincidentally or through training data influence, resembles a specific real person. That person discovers their apparent likeness being used in commercial content they never authorized. Under personality rights law, the real person may have a claim — but proving that an AI system’s output is derived from their specific data is technically and legally challenging.
The Legal Framework Gap
Each existing legal framework addresses part of the problem; none addresses it completely.
Personality rights (right of publicity) protect real people’s identities but do not apply to synthetic characters. The right terminates at death in some jurisdictions and persists post-mortem in others, creating additional complexity for posthumous AI twins.
Copyright law protects creative works, but the US Copyright Office has ruled that AI-generated content without sufficient human creative input is not copyrightable. A purely AI-generated face may therefore not qualify for copyright protection.
Trademark law can protect commercial identifiers, including character designs. Companies can register AI-generated brand characters as trademarks, providing commercial identity protection through trademark rather than personality rights.
Data protection law (GDPR, CCPA) grants individuals rights over their personal data, including biometric data used to train AI models. If a generative model was trained on a person’s images without consent, data protection claims may apply to the outputs.
The Consent Architecture
The most actionable element of the identity ownership problem is consent. For AI-generated identities based on real people, consent architecture determines who has authorized what.
The biometric sovereignty framework establishes that individuals should control the AI use of their biometric data. This principle applies clearly when a specific person’s data is used to generate an AI identity. It applies less clearly when a generative model produces outputs influenced by thousands of training subjects, no single one of whom is identifiable in the output.
AI avatar platforms are addressing the consent gap. Resemble AI has built consent verification into its voice cloning process. HeyGen requires consent documentation for custom avatar creation. D-ID provides consent management tools for organizations creating avatars from employee or talent data.
Commercial Implications
The identity ownership question has direct commercial consequences for several stakeholders.
Companies deploying AI spokespeople need clear ownership and usage rights documentation for their AI-generated brand characters. Without this, competitors could argue the synthetic character is unprotectable, or real people could claim resemblance.
Creators building virtual characters need legal frameworks that protect their creative investment in synthetic identities. Trademark registration provides some protection, but the law has not fully addressed the nuances of AI-generated identity assets.
Platforms generating synthetic media face potential liability if their outputs infringe on real people’s personality rights. Terms of service, consent documentation, and output monitoring are risk management essentials.
Investors evaluating AI identity companies need to assess the legal risk profile of businesses built on AI-generated identity assets that may face ownership challenges.
Path Forward
The legal framework for AI-generated identity will develop through three mechanisms: legislative action (new laws specifically addressing synthetic identity rights), judicial precedent (court decisions in pending cases), and industry self-regulation (platform policies and consent frameworks).
The companies best positioned in this uncertain landscape are those building robust consent architecture, maintaining clear provenance records for all AI-generated identity assets, and engaging proactively with the regulatory process.
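A provenance record of the kind described above can be sketched as a simple structure that ties a content hash of the generated asset to the model that produced it and to the consent documentation covering any real-person data involved. The function and field names below are hypothetical illustrations, not an established standard.

```python
import hashlib
from datetime import datetime, timezone

def provenance_record(asset_bytes: bytes, model_id: str,
                      consent_refs: list[str]) -> dict:
    """Build a hypothetical provenance entry for an AI-generated
    identity asset. The SHA-256 hash binds the record to the exact
    output; consent_refs point to the consent documentation for any
    real-person training or reference data."""
    return {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "model_id": model_id,
        "consent_refs": consent_refs,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: log one generated avatar frame against its consent trail.
entry = provenance_record(b"<rendered avatar frame>", "avatar-gen-v2",
                          ["consent/emp-42/2024-03"])
```

Hashing the asset rather than storing it means the record can later prove which output it describes without the log itself becoming a copy of the identity asset.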
For the legal framework underlying this analysis, see our personality rights coverage and legal framework analysis.