What Is a Deepfake?
A deepfake is a piece of synthetic media — most commonly video or audio — generated by artificial intelligence to realistically depict a real person performing actions or speaking words that they never actually performed or spoke. The term combines “deep learning” (the AI technique used to create them) and “fake” (the deceptive intent). Deepfakes emerged as a widespread concern in 2017, when freely shared deep-learning tools (initially autoencoder-based face-swapping software, with generative adversarial networks, or GANs, powering later systems) became accessible enough for non-experts to produce convincing face-swap videos.
The defining characteristic of a deepfake, distinguishing it from legitimate applications of the same underlying technology, is the absence of consent. The same AI systems that produce authorized AI clones and AI digital twins for commercial use can also produce deepfakes when deployed without the subject’s knowledge or authorization.
Technical Methods
Deepfakes are primarily created using three approaches: face-swapping (replacing one person’s face with another in existing video), face reenactment (manipulating a person’s facial expressions to match a different source), and full synthesis (generating an entirely new video of a person from scratch). Voice deepfakes — also called voice clones when unauthorized — use similar generative AI techniques to replicate a person’s vocal characteristics from audio samples.
The quality and accessibility of deepfake technology have improved dramatically. Systems that required hours of training data in 2020 can produce convincing results from minutes of source material in 2026. This democratization of the technology has made deepfake detection a critical industry need.
Detection and Defense
The deepfake detection market has grown in response to the threat, with companies such as Sensity AI and Reality Defender, and tools such as Intel’s FakeCatcher, offering identification capabilities. Detection methods include analyzing micro-expression inconsistencies, examining lighting and reflection patterns, and identifying statistical artifacts characteristic of AI generation rather than of ordinary video compression. However, detection remains an adversarial arms race: as detection improves, generation techniques adapt to evade it.
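As an illustration of artifact-based analysis, the toy heuristic below measures how much of an image’s spectral energy sits at high frequencies, since some generative upsampling pipelines are known to leave periodic high-frequency traces. This is a minimal sketch, not any vendor’s method: the function name, the cutoff value, and the synthetic test images are all illustrative assumptions, and a real detector would use trained models, not a single statistic.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of 2-D spectral energy beyond a normalized radial cutoff.

    Toy heuristic only: some generative pipelines leave excess
    high-frequency energy, but this statistic alone is not a detector.
    """
    # Power spectrum, shifted so zero frequency sits at the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    # Normalized radial distance of each frequency bin from the center.
    yy, xx = np.mgrid[-h // 2:(h + 1) // 2, -w // 2:(w + 1) // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

# A smooth gradient (mostly low-frequency) vs. the same gradient with
# added broadband noise standing in for generation artifacts.
rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = smooth + 0.5 * rng.standard_normal((64, 64))
assert high_freq_energy_ratio(noisy) > high_freq_energy_ratio(smooth)
```

In practice such hand-crafted statistics are easily evaded, which is one reason production systems layer many signals and retrain continuously.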
Biometric sovereignty frameworks and identity vault infrastructure represent a proactive defense strategy, giving individuals verifiable control over their digital identity rather than relying solely on reactive detection.
Related Terms
See also: AI Clone, AI Likeness, Biometric Sovereignty, Personality Rights