GDPR Compliance for AI-Generated Synthetic Media and Deepfakes
Synthetic faces now deliver advertisements, not just in the EU, but across the world. AI-generated voices handle customer support calls. Hyper-realistic deepfake videos circulate online, sometimes as satire, sometimes as political messaging, and sometimes as outright impersonation. Voice cloning scams have targeted corporate executives. Marketing teams deploy synthetic influencers. Film studios recreate performances without traditional filming. What began as experimental technology has quickly become part of the digital mainstream.
The unease surrounding this shift is not driven by science fiction, but by identity. Synthetic media may be artificial, yet it often replicates real people — their faces, voices, or biometric traits. A generated image may resemble an identifiable individual. A cloned voice may be built from real recordings. An AI-produced video may place words into the mouth of someone who never spoke them. That is where regulatory tension begins.
Under the General Data Protection Regulation, personal data includes any information relating to an identified or identifiable natural person. The fact that content is generated by an algorithm does not automatically remove it from legal scrutiny. If synthetic output can be linked to a real individual, questions arise immediately: when does it qualify as personal data? When does it become biometric data? Who is legally responsible for its creation and deployment? What lawful basis could justify replicating someone’s likeness or voice?
These questions are becoming more urgent as the EU’s AI Act enters into force and regulatory scrutiny of biometric and profiling technologies intensifies. This article works through each of them, helping you stay GDPR compliant amid rapid technological change.
When Does the GDPR Apply to AI-Generated Synthetic Media?
Under Article 2(1), the GDPR applies to the processing of personal data. That means two elements must be present: processing and personal data. Synthetic media only triggers the Regulation where both conditions are satisfied.
Where Synthetic Media Involves Processing of Personal Data
Article 4(2) defines processing broadly. It includes collection, recording, organisation, structuring, storage, adaptation, retrieval, consultation, use, disclosure by transmission, dissemination, alignment, restriction, erasure, or destruction of personal data. The definition is intentionally expansive to capture nearly every operation performed on data, whether automated or not.
In the context of synthetic media, processing can occur at multiple stages:
- When training data is collected and used to train a generative model
- When real images, videos, or voice recordings are uploaded into a system
- When an AI model adapts, modifies, or reconstructs identifiable features
- When the resulting output is published, distributed, or stored
Even if the final output is newly generated, the upstream use of real individuals’ images, voices, or biometric patterns may already constitute processing. The automation of the system does not remove the human or organizational decision-making behind its deployment. If an entity determines why and how personal data is used within the generative process, processing is taking place.
The second element is equally critical: the output must relate to a natural person. Personal data under Article 4(1) covers any information relating to an identified or identifiable natural person. A synthetic video that clearly depicts a real politician, executive, or private individual — even if the speech is fabricated — relates to that person. The artificial nature of the content does not negate the relational link.
If the generated media is connected to a real person’s identity, characteristics, or biometric traits, it falls within the conceptual scope of personal data. At that point, the GDPR framework is engaged.
The GDPR Applies Where a Natural Person Is Identifiable — Even Indirectly
Identifiability is not limited to naming someone explicitly. A person may be identified directly, for example, where a synthetic video labels an individual by name or clearly reproduces their recognizable face or voice.
However, identifiability also exists indirectly. A person can be identifiable through contextual elements such as role, appearance, distinctive mannerisms, metadata, or a combination with other available information. A deepfake of “the CEO of a major telecom company speaking at a 2025 earnings call” may allow identification even without naming the individual. If the individual can be singled out using means reasonably likely to be used (Recital 26), the threshold is met.
Importantly, the fact that source material is publicly available does not remove GDPR protection. Images scraped from public social media profiles, speeches from public conferences, or interviews posted online remain personal data. Public accessibility does not convert personal data into unregulated material. The Regulation applies irrespective of whether the data was initially obtained from private or public sources.
This is particularly significant for synthetic media systems trained on publicly available datasets. The argument that “the data was already online” does not eliminate obligations concerning lawful basis, transparency, or data subject rights. Identifiability remains the decisive factor.
The GDPR Does Not Apply Where Synthetic Content Is Fully Fictional and Non-Identifiable
There is, however, a boundary.
If synthetic media is entirely fictional and does not relate to any identified or identifiable natural person, the GDPR does not apply. A completely artificial face generated without reference to real individuals, a fictional avatar with no resemblance to any existing person, or a synthetic voice not derived from identifiable recordings would fall outside the Regulation — provided no individual can be singled out, directly or indirectly.
The key test is not realism, but identifiability. Hyper-realistic content can still be outside GDPR scope if it does not correspond to any real person and cannot reasonably be linked to one. Conversely, even subtle imitation may trigger the Regulation if identification is possible.
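To make the scope test concrete, here is a minimal Python sketch of the two-element analysis described above. The record fields and function names are illustrative assumptions, not a legal tool: they simply encode that the GDPR is engaged only where processing and identifiability (direct or indirect) coincide.

```python
from dataclasses import dataclass

# Illustrative triage record for a synthetic-media workflow; the field
# names are assumptions, not terms from the Regulation.
@dataclass
class MediaItem:
    involves_processing: bool      # any Art. 4(2) operation: collection, use, dissemination...
    directly_identifiable: bool    # named, or face/voice clearly reproduced
    indirectly_identifiable: bool  # role, context, or metadata singles someone out

def gdpr_applies(item: MediaItem) -> bool:
    """Art. 2(1) scope test: processing AND an identified or identifiable person.

    Identifiability may be direct or indirect, judged by means reasonably
    likely to be used (Recital 26); public availability of the source data
    does not remove protection.
    """
    return item.involves_processing and (
        item.directly_identifiable or item.indirectly_identifiable)

# Fully fictional avatar: processing occurs, but no one is identifiable.
print(gdpr_applies(MediaItem(True, False, False)))  # False -> outside GDPR scope
# Deepfake of a recognizable executive: both elements present.
print(gdpr_applies(MediaItem(True, True, False)))   # True  -> GDPR engaged
```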
When Does Synthetic Media Trigger Special Category Processing?
Once it is established that synthetic media involves personal data, the next question is even more consequential, because when processing escalates to biometric data, Article 9 is triggered. The default rule then shifts from “processing is permitted with a lawful basis” to “processing is prohibited unless a specific exception applies.” Synthetic media triggers biometric processing when:
It Is Used to Uniquely Identify a Natural Person
Article 4(14) GDPR defines biometric data as:
“personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that person.”
Several elements must be satisfied simultaneously.
First, there must be specific technical processing. Synthetic media systems typically rely on algorithmic analysis of facial geometry, vocal patterns, micro-expressions, or behavioural markers. Facial mapping, voiceprint extraction, and feature encoding are all forms of technical processing within the meaning of the Regulation.
Second, the data must relate to physical, physiological, or behavioural characteristics. Facial structure, iris patterns, voice frequency and cadence, and even distinctive speech rhythms fall squarely within these categories. A voice clone built from training samples does not merely reproduce sound — it encodes measurable biometric traits.
Third — and most important — the processing must allow or confirm the unique identification of a natural person.
This is where synthetic media becomes legally sensitive.
- If a system replicates a specific individual’s face to authenticate identity in a verification system, it is using biometric characteristics for identification.
- If a cloned voice is used to pass voice-based authentication in banking or corporate security systems, it engages biometric identification.
- If facial replication is used to match a synthetic image against a real database of individuals, the identification threshold is clearly met.
In these circumstances, the system is engaging in biometric processing because it technically processes unique characteristics capable of singling out a person.
However, realism alone is insufficient. A synthetic face that resembles a person but is not used to identify or verify that person does not automatically constitute biometric data under Article 4(14). The identification function is the decisive factor.
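The cumulative structure of Article 4(14) can be expressed as a simple predicate. This is a hedged sketch: the three boolean inputs stand in for fact-specific legal assessments, and the function merely mirrors the "all elements simultaneously" logic set out above.

```python
def is_biometric_data(specific_technical_processing: bool,
                      relates_to_characteristics: bool,
                      allows_unique_identification: bool) -> bool:
    """Art. 4(14) is cumulative: every element must be satisfied.

    A realistic synthetic face that is never used to identify anyone fails
    the third element and so is not biometric data in the legal sense.
    """
    return (specific_technical_processing
            and relates_to_characteristics
            and allows_unique_identification)

# A voice clone used to pass voice authentication: all three elements hold.
print(is_biometric_data(True, True, True))   # True
# A lookalike face with no identification function: third element fails.
print(is_biometric_data(True, True, False))  # False
```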
Article 9 Special Category Processing Is Triggered Where the Biometric Data Is Used for Identification
Article 9 includes “biometric data for the purpose of uniquely identifying a natural person” among categories of data that are, in principle, prohibited from processing unless one of the Article 9(2) exceptions applies.
Not all biometric data falls under Article 9. Only biometric data processed for the purpose of uniquely identifying a person triggers the special category regime.
If a generative AI system processes facial vectors to verify whether a person matches a stored identity profile, the purpose is identification. Article 9 is triggered. The controller must then rely on one of the limited exceptions, such as:
- Explicit consent
- Substantial public interest under EU or Member State law
- Legal claims
- Employment or social security law grounds
Absent such an exception, processing is unlawful.
In the context of synthetic media, Article 9 may be engaged where:
- Voice cloning is deployed in authentication systems
- Synthetic avatars are linked to real individuals for identity verification
- Deepfake detection systems rely on biometric matching databases
- Generative tools encode identifiable biometric templates tied to named persons
Once the identification purpose exists, compliance obligations intensify. A lawful basis under Article 6 is no longer sufficient. Article 9 must also be satisfied.
This is a higher regulatory threshold with significantly increased enforcement risk.
When Article 9 Is Not Triggered
If biometric characteristics are processed but not for the purpose of uniquely identifying a person, Article 9 is not automatically triggered.
For example:
- A synthetic character generated using aggregated facial data that cannot be traced back to a specific individual.
- A voice model trained on multiple recordings, but not deployed to verify or authenticate identity.
- A deepfake created for parody that does not function within an identification system.
In such scenarios, the data may still qualify as personal data if a person is identifiable. It may even involve biometric traits in a descriptive sense. But unless the processing is carried out for identification purposes, the special category prohibition under Article 9 does not apply.
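The purpose-bound trigger of Article 9 can likewise be sketched as decision logic. The exception labels below are shorthand for the Article 9(2) grounds listed earlier; treating them as a flat set is an illustrative assumption, since each ground carries its own detailed conditions that this sketch does not model.

```python
# Shorthand labels for the Art. 9(2) grounds discussed above.
ARTICLE_9_EXCEPTIONS = {
    "explicit_consent",
    "substantial_public_interest",
    "legal_claims",
    "employment_social_security",
}

def special_category_processing_lawful(is_biometric: bool,
                                       purpose_is_identification: bool,
                                       exception: str | None) -> bool:
    """Art. 9 applies only to biometric data processed *for the purpose of*
    uniquely identifying a person; once engaged, processing is prohibited
    unless an Art. 9(2) exception lifts the prohibition."""
    if not (is_biometric and purpose_is_identification):
        return True  # Art. 9 not engaged; an Art. 6 lawful basis is still required
    return exception in ARTICLE_9_EXCEPTIONS

# Parody deepfake outside any identification system: Art. 9 not engaged.
print(special_category_processing_lawful(True, False, None))               # True
# Voice clone used in banking authentication, no exception: unlawful.
print(special_category_processing_lawful(True, True, None))                # False
# Same system with documented explicit consent: prohibition lifted.
print(special_category_processing_lawful(True, True, "explicit_consent"))  # True
```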
What Lawful Basis Is Required for Synthetic Media Processing?
Once synthetic media falls within the scope of the General Data Protection Regulation — because it involves processing of personal data — the next step is unavoidable: what lawful basis permits the processing?
Under Article 6(1), processing of personal data is lawful only if at least one of the listed grounds applies. There is no “AI exception.” The fact that content is generated by an automated system does not relax this requirement. The controller must identify and document a valid legal basis before collecting training data, generating synthetic outputs tied to individuals, or distributing that content.
Synthetic Media Processing Requires a Valid Article 6 Lawful Basis in All Cases
Article 6(1) provides six possible lawful bases:
- Consent
- Performance of a contract
- Compliance with a legal obligation
- Protection of vital interests
- Performance of a task carried out in the public interest or exercise of official authority
- Legitimate interests pursued by the controller or a third party
In commercial synthetic media deployments, the most realistic candidates are consent (Article 6(1)(a)) and legitimate interests (Article 6(1)(f)). Contract may apply in limited contexts, such as where an actor contractually agrees to digital replication. Public interest grounds may arise in limited governmental or journalistic settings.
The lawful basis must exist at each relevant stage of processing. Controllers cannot rely on a vague assertion that the output is “creative” or “innovative.” The GDPR requires legal grounding for the processing itself.
Importantly, the chosen lawful basis determines the compliance architecture — including transparency obligations, withdrawal rights, and documentation duties under Articles 13, 14, and 30.
Explicit Consent
Where synthetic media replicates a specific, identifiable individual — particularly their face, voice, or likeness — consent is often the most appropriate lawful basis.
Under Article 4(11), consent must be freely given, specific, informed, and unambiguous. Article 7 adds conditions concerning demonstrability and withdrawal. If the processing involves special category data (for example, biometric identification under Article 9), consent must be explicit.
Replication of a person’s likeness carries significant risks to dignity, reputation, and identity autonomy. The more precise and individualized the synthetic reproduction, the stronger the case that consent is required to ensure fairness and lawfulness under Article 5(1)(a).
For consent to be valid in this context:
- The individual must understand that their image, voice, or biometric traits will be used to generate synthetic content.
- The scope of use must be clearly defined (commercial advertising, internal simulation, entertainment, etc.).
- The individual must be able to withdraw consent at any time, and withdrawal must be as easy as giving it.
Blanket consent buried in platform terms is unlikely to satisfy GDPR standards where realistic digital replication is involved. Supervisory authorities assess consent strictly, especially where there is imbalance of power or significant impact on the data subject.
In high-risk synthetic replication, reliance on anything less than explicit, documented consent may expose controllers to serious enforcement risk.
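A consent record supporting the conditions above might be modelled as follows. This is a minimal sketch under assumed field names; a production system would also capture the information actually shown to the individual and an audit trail for Article 7(1) demonstrability.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    subject_id: str
    scope: str                       # e.g. "commercial advertising, EU market, 12 months"
    explicit: bool                   # explicit consent needed where Art. 9 is engaged
    informed_of_synthetic_use: bool  # subject knows their likeness feeds generation
    given_at: datetime
    withdrawn_at: datetime | None = None

    def withdraw(self) -> None:
        """Art. 7(3): withdrawal must be as easy as giving consent."""
        self.withdrawn_at = datetime.now(timezone.utc)

    def is_valid(self) -> bool:
        return (self.explicit
                and self.informed_of_synthetic_use
                and self.withdrawn_at is None)

record = ConsentRecord("subject-42", "commercial advertising, EU, 12 months",
                       explicit=True, informed_of_synthetic_use=True,
                       given_at=datetime.now(timezone.utc))
print(record.is_valid())  # True
record.withdraw()
print(record.is_valid())  # False -> downstream generation and distribution must stop
```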
Legitimate Interests
Article 6(1)(f) permits processing where it is necessary for the legitimate interests pursued by the controller or a third party, except where overridden by the interests or fundamental rights and freedoms of the data subject.
This requires a structured three-part balancing test:
- Purpose test — Is there a legitimate interest? Examples might include fraud detection, security research, artistic expression, or technological development.
- Necessity test — Is the processing necessary for that purpose? Could the objective be achieved without replicating identifiable individuals?
- Balancing test — Do the individual’s rights override the interest? This includes considering the nature of the data, reasonable expectations, potential harm, and safeguards implemented.
In synthetic media cases, the balancing test is highly contextual. Replicating a public figure for clearly labeled satire may weigh differently from cloning a private individual’s voice for commercial exploitation. The more intrusive, realistic, or reputation-sensitive the output, the harder it becomes to justify reliance on legitimate interests.
Controllers must document this balancing assessment. Under Article 5(2), accountability requires demonstrable compliance.
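One way to satisfy that documentation duty is to record the three-part test as structured output. The sketch below assumes the assessor supplies the legal conclusions as boolean inputs; the code only fixes the shape of the record, it does not perform the legal analysis.

```python
def legitimate_interests_assessment(purpose_legitimate: bool,
                                    processing_necessary: bool,
                                    rights_override_interest: bool) -> dict:
    """Record the three-part Art. 6(1)(f) test for Art. 5(2) accountability.
    The booleans stand in for reasoned legal conclusions."""
    return {
        "purpose_test_passed": purpose_legitimate,
        "necessity_test_passed": processing_necessary,
        "balancing_test_passed": not rights_override_interest,
        "legitimate_interests_available": (purpose_legitimate
                                           and processing_necessary
                                           and not rights_override_interest),
    }

# Cloning a private individual's voice for commercial exploitation: the
# balancing test fails, so Art. 6(1)(f) is unavailable.
print(legitimate_interests_assessment(True, True, True))
```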
Who Is the Controller or Processor in Synthetic Media Ecosystems?
Synthetic media is not created for its own sake. It is deployed in advertising campaigns, political messaging, internal corporate training, entertainment production, and increasingly in fraud schemes.
In all such scenarios, the GDPR focuses on who decided to use a person’s identity, likeness, or biometric traits in that context, rather than on who merely built the generation tool.
Under Article 4(7), the controller is the entity that determines the purposes and essential means of processing. In synthetic media environments, that determination often occurs at the moment identity is intentionally deployed.
Control Arises Where Identity Is Deliberately Replicated
When a company launches a campaign using a synthetic spokesperson modelled on a real individual, that company determines:
- Why the likeness is being used (commercial persuasion),
- In what markets it will appear,
- How long it will circulate,
- Whether it will be modified or localized.
That entity is the controller for that deployment — even if the underlying generative tool was built elsewhere.
The same logic applies where:
- A studio recreates an actor’s digital performance,
- A political organization distributes AI-generated speeches,
- A business clones an executive’s voice for automation.
The controller is the entity that decides to operationalize identity.
Private Individuals Can Also Be Controllers Under the GDPR
The General Data Protection Regulation applies not only to companies, but to “natural or legal persons” who determine the purposes and means of processing (Article 4(7)). The Regulation does not distinguish between corporate and private actors at the definitional level.
The only structural limitation relevant to private citizens appears in Article 2(2)(c): the GDPR does not apply to processing carried out by a natural person “in the course of a purely personal or household activity.” This is commonly referred to as the household exemption.
However, this exemption is interpreted narrowly.
In Ryneš (C-212/13), the Court of Justice of the European Union held that a home CCTV system lost the household exemption because it captured part of a public street. The Court reasoned that once processing extends beyond the private sphere, it is no longer “purely personal.” The principle derived from that judgment is functional: public-facing processing does not benefit from the exemption merely because it is carried out by an individual.
Applied to synthetic media, this means:
- If a private individual creates and publishes a deepfake of an identifiable person on a public social media account,
- If an individual clones someone’s voice to conduct a fraud scheme,
- If someone distributes AI-generated content targeting a real person’s likeness or reputation to a broad audience,
the activity is unlikely to qualify as “purely personal.”
In those situations, the individual determines:
- The purpose of processing (e.g., satire, deception, harassment, financial gain), and
- The means of processing (selection of tools, choice of subject, method of dissemination).
Under Article 4(7), that individual meets the definition of a controller.
Model Providers and Platforms May Be Controllers in Parallel
Responsibility does not necessarily rest only with the person generating the content.
If a technology provider:
- Trains its system on identifiable faces or voices,
- Retains user-generated outputs for model refinement,
- Designs systems specifically optimized for realistic identity replication,
it may independently determine processing purposes.
In such cases, controller responsibility can exist at multiple layers. The individual deploying a deepfake may be a controller for dissemination. The platform hosting or monetizing that content may be a controller for distribution and amplification. The model developer may be a controller for training and system design.
Synthetic ecosystems frequently produce parallel or joint controllership, not singular responsibility.
Processor Status Is Narrow and Instruction-Bound
An entity qualifies as a processor only where it processes personal data strictly on documented instructions and does not determine its own purposes.
Cloud infrastructure providers hosting AI systems may fall into this category — but only if they do not reuse, analyze, or independently exploit the data.
The moment an entity shapes purpose — through analytics, model improvement, monetization strategies, or algorithmic prioritization — it may move into controller territory.
When Is a Data Protection Impact Assessment (DPIA) Required?
Under the GDPR, a Data Protection Impact Assessment (DPIA) is required whenever synthetic media processing crosses into “high risk” territory. Article 35(1) states that a DPIA is required where processing is “likely to result in a high risk to the rights and freedoms of natural persons.” The legal test is forward-looking and risk-based. The obligation is triggered not by harm already occurring, but by the likelihood and severity of potential harm.
Where Synthetic Media Processing Is Likely to Result in High Risk
The GDPR does not provide a closed definition of “high risk,” but it links the concept to impacts on fundamental rights — privacy, dignity, reputation, equality, and freedom from discrimination. Risk becomes “high” when processing could significantly affect individuals’ legal, economic, or social position, particularly where biometric data, profiling, or automated decision-making are involved.
Supervisory authorities across the EU — guided by the former Article 29 Working Party (now the European Data Protection Board) — have clarified that high risk is more likely where processing involves:
- Evaluation or scoring of individuals
- Automated decisions with legal or similarly significant effects
- Systematic monitoring
- Sensitive or biometric data
- Innovative or novel technologies
- Large-scale processing
Synthetic media systems frequently combine several of these criteria. For example, AI-driven facial reenactment tools that map real individuals’ facial features may involve biometric processing. Even where the output is “synthetic,” if it is generated from identifiable individuals, the processing falls within GDPR scope — and the risk assessment must reflect that.
The key point: if synthetic media could realistically expose individuals to identity misuse, reputational damage, manipulation, or discrimination, a DPIA becomes mandatory.
Large-Scale Biometric or Systematic Monitoring Through Synthetic Media Triggers Mandatory DPIA Obligations
Article 35(3) provides concrete examples where a DPIA is explicitly required. These include:
- Systematic and extensive evaluation based on automated processing
- Large-scale processing of special categories of data (including biometric data under Article 9)
- Systematic monitoring of publicly accessible areas
Synthetic media technologies can fall within these triggers when deployed at scale. If a platform generates, analyzes, or manipulates biometric facial or voice data of thousands or millions of individuals — even scraped from public sources — this may constitute large-scale biometric processing. Large-scale is assessed by considering the number of data subjects, volume of data, duration of processing, and geographical scope.
Similarly, where synthetic media tools are integrated into systems that monitor users — for example, identity verification systems using AI-generated facial comparison, or platforms that automatically flag manipulated media — the systematic nature of monitoring strengthens the obligation to conduct a DPIA.
Importantly, innovation itself increases regulatory scrutiny. Article 35 expressly mentions “new technologies.” Deepfake and generative AI systems qualify as novel and complex processing operations. When novelty combines with scale or biometric elements, a DPIA is no longer discretionary.
Deepfake Technologies Create Specific Risks That Strengthen the Case for Conducting a DPIA
Even where processing might not automatically meet Article 35(3) thresholds, the nature of deepfake technology often elevates the risk profile.
Manipulation risk arises where individuals’ likenesses are altered or fabricated in ways that distort reality. This can affect democratic participation, employment opportunities, or social standing.
Identity fraud risk is particularly acute when synthetic voice or facial models are used to impersonate individuals. If a system enables convincing identity simulation, the potential for financial fraud or unauthorized account access significantly increases the likelihood of “high risk.”
Reputational harm risk is central to many deepfake cases. Fabricated explicit, defamatory, or misleading content can cause irreversible personal and professional damage. Because the GDPR protects dignity and personal integrity as part of fundamental rights, such risks are legally relevant to the DPIA threshold analysis.
A DPIA in this context is not merely a formality. It must:
- Describe the intended processing operations
- Assess necessity and proportionality
- Identify risks to individuals
- Set out mitigation measures
If high risk remains after mitigation, Article 36 requires prior consultation with the relevant supervisory authority.
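The screening logic described in this section can be summarised in a short checklist function. The criteria strings paraphrase the supervisory guidance above, and the two-criteria threshold is the WP29/EDPB rule of thumb rather than a statutory test, so treat this as an assumption-laden sketch.

```python
# Paraphrased WP29/EDPB screening criteria from the list above.
HIGH_RISK_CRITERIA = frozenset({
    "evaluation_or_scoring",
    "automated_decisions_significant_effects",
    "systematic_monitoring",
    "sensitive_or_biometric_data",
    "innovative_technology",
    "large_scale",
})

def dpia_required(criteria_met: set[str], art_35_3_trigger: bool) -> bool:
    """Screening sketch: Art. 35(3) triggers are mandatory; otherwise the
    WP29 rule of thumb treats two or more criteria as 'likely high risk'.
    This is guidance-derived heuristic logic, not a statutory test."""
    if art_35_3_trigger:
        return True
    return len(criteria_met & HIGH_RISK_CRITERIA) >= 2

# A deepfake generator trained on scraped faces at scale: novel technology,
# biometric data, large scale -> DPIA required even before Art. 35(3).
print(dpia_required({"innovative_technology", "sensitive_or_biometric_data",
                     "large_scale"}, art_35_3_trigger=False))  # True
```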
What Constitutes Synthetic Media and Deepfakes?
Synthetic media refers broadly to content generated, altered, or enhanced by artificial intelligence, typically through methods such as generative adversarial networks (GANs), diffusion models, or large language models. Deepfakes are a subset of synthetic media, distinguished by the ability to convincingly manipulate audio or visual data to present situations or statements that never occurred.
While the technology was initially seen as a novelty or comedic tool, its potential to impersonate individuals—often without their consent—has raised significant ethical and legal concerns. From political misinformation to AI-generated pornography, the misuse of this technology can result in reputational damage, emotional distress, and privacy breaches of a serious nature.
The Concept of Personal Data in a Digital Cloak
One of the GDPR’s core tenets is the protection of personal data. Personal data refers to any information that can identify an individual, which includes not only names and addresses but also images, voice recordings, biometric data, and likenesses. Consequently, when synthetic media replicates someone’s appearance or voice, it potentially constitutes processing of personal data under GDPR.
The complexity arises in determining whether the generated content genuinely relates to a real, identifiable individual. If an AI model creates a face that is statistically derived from thousands of real faces but does not correspond to any specific person, the content may not fall under the GDPR, as it is not “about” a real person. However, when a synthetic video clearly depicts a living person—whether in jest, deception, or tribute—the law treats this as processing personal data.
The courts and data protection authorities in Europe have reaffirmed that identifiable likenesses and vocal imprints qualify as personal data. This interpretation means that anyone developing or distributing synthetic media featuring real individuals must be aware of their obligations under GDPR, particularly regarding legal bases for processing and data subject rights.
Legal Bases and the Importance of Consent
Under GDPR, processing personal data requires a lawful basis. For synthetic media and deepfakes, consent may often be the most appropriate legal ground. Consent must be explicit, informed, freely given, and capable of being withdrawn. The challenge is that much synthetic content is created without the subject’s knowledge, let alone their consent.
For example, using an actor’s voice in a fictional narrative may be lawful with their permission. However, synthesising their likeness in a compromising or misleading context without consent could constitute a serious violation of GDPR, as well as other legal frameworks like defamation laws or image rights in some jurisdictions.
Organisations creating AI-generated content must rigorously assess whether consent is necessary and, if so, ensure it is documented appropriately. Even where creators claim that material is satirical or artistic, the GDPR offers no blanket exemption for artistic expression: under Article 85, freedom of expression must be reconciled with data protection, and any artistic-purpose derogation is weighed against individual privacy rights.
Legitimate interest, another lawful basis, is harder to invoke for synthetic media involving identifiable individuals. The balancing test it requires weighs the data controller’s interests against the fundamental rights and freedoms of the data subject, and given the risks of misrepresentation or reputational harm, that test is difficult to satisfy.
Transparency and the Right to Be Informed
GDPR mandates that individuals must be informed about the collection and use of their personal data. Transparency is a fundamental pillar of accountability, and it is particularly important when the processing involves complex AI systems.
In the context of deepfakes, providing effective notice to individuals portrayed within such content is challenging. Many may never learn that their image or voice has been synthesised. Nonetheless, the duty to provide clear, accessible information about the purposes and legal basis of the processing still stands. Where synthetic content is disseminated publicly, creators or publishers may need to issue disclaimers or metadata disclosures that indicate the artificial nature of the media, along with contact information for data subjects to raise concerns.
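One practical form such a disclosure could take is a machine-readable metadata record published alongside the content. The schema below is entirely hypothetical—the field names are assumptions, not any standard—but it illustrates how the transparency information discussed above could travel with the file.

```python
import json
from datetime import datetime, timezone

def build_disclosure_metadata(controller: str, contact: str,
                              purpose: str, lawful_basis: str) -> str:
    """Produce a JSON disclosure record to publish alongside synthetic media."""
    return json.dumps({
        "synthetic": True,             # flags the artificial nature of the content
        "controller": controller,      # entity that decided to deploy the likeness
        "contact": contact,            # channel for data subjects to raise concerns
        "purpose": purpose,
        "lawful_basis": lawful_basis,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }, indent=2)

print(build_disclosure_metadata(
    "Example Media GmbH",      # hypothetical controller
    "privacy@example.com",     # hypothetical contact point
    "clearly labelled satire",
    "legitimate interests, Art. 6(1)(f)"))
```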
Lack of transparency further elevates the risk of a breach, especially in media designed to deceive. Deepfakes purporting to convey genuine news or communication—such as fabricated political speeches—could become the subject of regulatory and possibly criminal scrutiny, depending on how closely they imitate real individuals.
Data Subject Rights in the Synthetic Landscape
GDPR empowers individuals with a suite of rights over their personal data. These include the rights to access, rectify, erase, restrict processing, and object to processing, as well as the right not to be subject to solely automated decision-making.
One key right that intersects with synthetic media is the “right to erasure,” commonly known as the right to be forgotten. If an individual learns that their likeness is being improperly used in synthetic media, they have the right to request deletion of that content, provided no overriding legal obligations exist to retain it.
Similarly, the right to object could be exercised if the synthetic media causes distress, reputational harm, or infringes on personal freedoms. Data controllers must have mechanisms in place to address these requests promptly and efficiently. With the viral nature of deepfakes and synthetic content, addressing these rights in a timely manner becomes not just a legal necessity but also a reputational safeguard.
The complexity of fulfilling data subject requests increases when the content is widely distributed, hosted across multiple platforms, or published anonymously. Controllers must ensure they maintain records of processing activities and maintain sufficient oversight over their AI tools to identify and trace the origin of synthetic content.
The Role of Data Protection Impact Assessments
When deploying new technologies that pose high risks to individual rights and freedoms—including synthetic media generation—data controllers are required to carry out a Data Protection Impact Assessment (DPIA). DPIAs are designed to identify, assess, and mitigate risks prior to the start of processing.
In the case of synthetic media, a DPIA should evaluate potential harms related to dignity, consent, reputation, and misrepresentation. It should also assess the effectiveness of safeguards such as content labelling, access controls, and user redress mechanisms. For platforms that allow the creation or sharing of AI-generated videos, a DPIA may reveal systemic risks, prompting the need for design changes or even policy limitations.
Furthermore, regulators may require prior consultation if the DPIA identifies a high risk that cannot be fully mitigated. For large platforms or high-profile users of synthetic media, this adds an additional layer of oversight that must be factored into development timelines and business models.
Building Ethical and Compliant Frameworks
Beyond legal compliance, organisations must consider the broader ethical implications of AI-generated content. The potential misuse of synthetic media to spread disinformation, commit fraud, or manipulate public opinion cannot be overlooked. While GDPR offers a vital legal framework, it is not a panacea for all issues associated with deepfakes.
Industry best practices are beginning to emerge, such as watermarking or cryptographically signing synthetic content to indicate its artificial origin. User verification measures and age restrictions can also help reduce the misuse of these tools, particularly where the content involves political figures, celebrities, or vulnerable individuals.
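As a rough illustration of the signing idea, the sketch below tags content with an HMAC over its hash using only the Python standard library. This is not an implementation of any provenance standard (such as C2PA); the key handling and scheme are placeholder assumptions.

```python
import hashlib
import hmac

SIGNING_KEY = b"replace-with-a-managed-secret"  # placeholder; use proper key management

def sign_synthetic_content(content: bytes) -> str:
    """Return an HMAC tag over the content hash, marking it as synthetic."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_synthetic_tag(content: bytes, tag: str) -> bool:
    """Constant-time check that a tag matches the content."""
    return hmac.compare_digest(sign_synthetic_content(content), tag)

video_bytes = b"...synthetic video payload..."
tag = sign_synthetic_content(video_bytes)
print(verify_synthetic_tag(video_bytes, tag))  # True
```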
Collaboration between AI developers, policymakers, data protection experts, and civil society will be crucial in shaping a responsible ecosystem. Regulatory sandboxes and public consultations can create space for innovation while reinforcing privacy and human dignity at the system design level.
Enforcement Trends and Future Outlook
While GDPR has been in force since 2018, enforcement actions specifically related to synthetic media are still relatively limited. Yet, as the use of AI-generated content proliferates, it is only a matter of time before regulatory scrutiny intensifies. Complaints and investigations are already underway in several European jurisdictions where individuals have raised concerns about the misuse of synthetic likenesses.
Given the evolving nature of both the technology and the law, we may see guidance documents, codes of conduct, or even new legislative proposals aimed at tackling the unique aspects of AI-generated personal data. Eventually, the concept of AI transparency may require the inclusion of provenance data in all media files, essentially creating a traceable record of edits and origination—a kind of digital chain of custody.
For now, entities working in the frontier of synthetic media must remain vigilant. Establishing governance frameworks, appointing Data Protection Officers where appropriate, and keeping abreast of fragmented legal developments across EU member states will be foundational strategies for staying compliant.
Conclusion
Synthetic media powered by AI is reshaping the way we create and consume information. However, with great creative power comes an equally great responsibility to protect the dignity, identity, and autonomy of individuals. GDPR provides a robust legal architecture to ensure that the use of such technologies does not come at the expense of fundamental rights. Whether you are a developer, marketer, artist, or platform operator, understanding your obligations and embedding privacy-by-design into your processes is not just good practice—it is a legal imperative. The continued legitimacy and societal acceptance of synthetic media hinge on the ability to balance innovation with ethical governance and compliance.