GDPR and Facial Recognition: Privacy Implications and Legal Considerations
Facial Recognition Technology (FRT) is now widely used, not just in Europe but across the world. It appears in airports, office buildings, smartphones, and public security systems, among other places. Advances in artificial intelligence have made these systems faster, cheaper, and easier to deploy, carrying facial recognition from the early manual systems of the 1970s to the widespread, AI-driven applications of today. In Europe, however, this expansion has raised serious privacy concerns.
European data protection law treats facial recognition with particular caution. When facial images are used to identify a person, the data constitutes biometric data under the GDPR and falls within the Special Categories of Personal Data. This classification reflects the heightened risks it poses to an individual’s rights and freedoms.
Facial data is permanent and cannot be changed if misused or exposed. For this reason, the GDPR generally prohibits the processing of biometric data unless strict conditions are met as set out in Article 9. In parallel, the EU AI Act introduces further limits, including bans on untargeted facial ‘scraping’ from the internet and Real-Time Biometric Identification (RBI), with very few exceptions.
Regulators have begun to actively enforce these rules. Data protection authorities are taking action against organizations that collect or use facial images without a lawful basis, imposing hefty fines, ordering the deletion of biometric data, and restricting the use of certain technologies. In this regulatory environment, understanding how the GDPR applies to facial recognition technology is no longer optional. This article explains the legal framework, highlights common compliance failures, and sets out the key considerations that apply to facial recognition systems today.
How GDPR Treats Facial Recognition
When Facial Recognition Becomes Biometric Data
Under the GDPR, a face does not automatically count as biometric data. The law makes a clear distinction between having an image of a face and using that image to identify a person. This distinction explains when facial data is allowed and when it becomes legally restricted.
Under Recital 51, photographs are not biometric data by default. A facial image can be stored, viewed, or recorded without triggering special category rules. This includes ordinary photos, CCTV footage, or stored ID images, as long as no technology is used to recognise or verify identity. At this stage, the data is still personal data, but it is not biometric data.
Facial data becomes biometric data when it is technically processed for the purpose of identifying or authenticating a person (Article 4(14) GDPR). The key change happens when software analyses facial features, measures facial geometry, or creates a facial template that allows a face to be compared to other faces.
A facial template is created only to enable recognition or matching. Therefore, once a template exists, the data becomes biometric. It does not matter whether a name is attached or whether the data is pseudonymised. If the template can be linked back to a person using other available data, GDPR Article 9 applies. Pseudonymisation does not remove biometric status.
The same logic applies to facial databases. These systems are evaluated based on their capability, not their intention. If a database allows faces to be matched, searched, or compared in a way that can identify individuals, it qualifies as biometric processing. Identification does not need to happen immediately. The ability to make identification possible is enough.
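To make the trigger concrete, here is a deliberately simplified Python sketch (the vectors, the threshold, and the function names are all hypothetical; real systems derive templates from neural-network embeddings of detected faces): a facial template is just a numeric representation of a face, and the capability to compare such representations is exactly what makes the data biometric.

```python
import math

# Illustrative only: a "template" here is just a list of floats.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Once templates exist and can be compared like this, the data is biometric
# under Article 4(14), whether or not a name is attached to either template.
enrolled_template = [0.12, 0.83, 0.45, 0.07]  # hypothetical stored template
probe_template = [0.10, 0.80, 0.50, 0.05]     # template from a new image

MATCH_THRESHOLD = 0.95  # hypothetical operating threshold
if cosine_similarity(enrolled_template, probe_template) >= MATCH_THRESHOLD:
    print("match: identification is possible, so Article 9 applies")
```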
Are there any exceptions?
Yes, facial data does not become biometric when processing has no identification purpose. Activities such as counting people, detecting the presence of a face, or analysing images in a way that cannot reasonably identify individuals fall outside biometric rules. The GDPR still applies, but the ‘special category’ threshold has not been crossed.
European authorities focus on systems that extract facial features and enable matching at scale. This interpretation is also reinforced by the EU AI Act, which treats identity-focused facial recognition as the highest-risk use.
Why Biometric Data Is “Special Category” Under GDPR
The GDPR classifies biometric data as a Special Category because of the risks it poses to individuals’ rights and freedoms. It sits alongside other highly sensitive data that carries serious risk if misused.
This includes:
- Health data,
- Genetic data,
- Racial or ethnic origin,
- Political opinions,
- Religious beliefs,
- Data about a person’s sex life or sexual orientation.
This type of data is deeply linked to the individual as a human, and its exposure can lead to discrimination, exclusion, or lasting personal damage.
Biometric data is uniquely sensitive because it is both permanent and powerful. Unlike a password or a credit card, biometric identifiers cannot be changed. A face cannot be reset or replaced. Once compromised, the risk follows a person for life. This permanence is one of the core reasons why Article 9 of the Regulation imposes stricter rules on processing and treats biometric data as high risk.
In addition, biometric data enables precise identification at scale. Facial recognition allows tracking across locations, platforms, and time, which creates risks of mass surveillance, profiling, and loss of anonymity in public spaces. European regulators view such capabilities as a direct threat to fundamental rights, including privacy, freedom of movement, and freedom of expression.
Why Facial Recognition Is Often Unlawful Under GDPR
Facial recognition becomes unlawful under GDPR, not because it is prohibited outright, but because most organizations fail at three legal checkpoints.
a) Article 9’s Presumption of Prohibition and Special-Category Processing
GDPR Article 9 sets the starting rule for facial recognition. Biometric data used to identify a person is not allowed by default. The law presumes high risk, so the processing is prohibited unless a strict exception applies. This is known as the presumption of prohibition. So, even if a controller can identify a lawful basis under Article 6, the processing remains unlawful unless an Article 9 exception is satisfied. In short, when it comes to biometric identification, Article 9 controls the analysis: without a valid exception, the processing is unlawful from the start.
The reason is risk. Biometric identification enables precise, lasting identification that can be used across time and locations. The law treats this as inherently high-impact processing, thereby flipping the burden of justification onto the data controller. Silence or convenience does not help.
The Clearview AI cases across France, Italy, Greece, and other European countries show how Article 9 operates in reality. Clearview AI built a facial recognition system by scraping billions of images from public websites and social media platforms. Those images were converted into facial templates and used by law enforcement to identify individuals without their knowledge or consent. Even though the system was accurate and innovative, regulators consistently emphasised that biometric identification was taking place without a valid Article 9 exception. As a result, the processing was unlawful from the outset, leading to hefty fines, deletion orders, and bans on further processing.
Article 9 allows biometric processing only in narrow situations. Explicit consent is one option, and substantial public interest based on law is another. Outside these, and a few other exceptions (covered below), biometric identification remains prohibited, even if the system works well. This is why facial recognition so often fails under GDPR. The law treats it as a last-resort technology, not a default tool.
b) Common Lawful Bases Don’t Suffice
Once Article 9 blocks processing, organizations often point to Article 6 lawful bases. This is where misunderstandings occur.
Legitimate interest
Legitimate interest is the most common fallback. It attempts to assert that the organization’s interest outweighs the impact on individuals. In facial recognition cases, this balance rarely succeeds. The intrusion is considered too severe, while the benefits are usually replaceable.
The GDPR does not focus on whether facial recognition is useful, but rather on whether it is strictly necessary. If identification can be achieved through alternative means, such as badges, passwords, staff checks, or manual systems, then biometric processing is considered excessive.
For example, in a Swedish school attendance case, a secondary school in Skellefteå replaced manual roll calls with facial recognition. The goal was efficiency and accurate attendance tracking. While these interests were recognised as legitimate, the Swedish Data Protection Authority did not consider them strong enough. Attendance could have been recorded using traditional methods, such as manual lists or ID checks. Because facial recognition was not strictly necessary, the legitimate interest basis failed.
Contractual necessity
Other Article 6 lawful bases fail for similar reasons. Contractual necessity rarely works because, under the law, a technology must be ‘objectively essential’ to provide a service. For example, a bank needs your name and address to open an account, but it does not strictly need your faceprint to function. Since there are almost always less invasive alternatives, such as using a physical ID card, a password, or a mobile app, scanning a face is viewed as a choice rather than a necessity. If a service can be delivered without biometrics, then contractual necessity cannot be used as the legal basis.
Legal obligation
Legal obligation is another common argument that usually fails for private companies. This legal basis only applies when a specific law commands an organization to process data in a certain way. While many businesses are legally required to maintain a safe environment or verify the age of customers, European laws almost never specifically require the use of facial recognition to achieve those goals. Because the decision to use a biometric system is a voluntary choice made by the business, it cannot claim to be simply “following a legal requirement” to justify bypassing the strict rules for sensitive data.
Regulators apply the same logic across sectors. Workplace access, retail security, and customer verification systems face the same problem. The law demands restraint where identification can be achieved by less intrusive means.
To put it simply, Article 6’s lawful bases were designed for ordinary data processing. Biometric identification, however, triggers an additional layer of protection. GDPR expects stronger justification, and in most real-world cases, that justification is missing.
c) The Problem With “Consent” in Facial Recognition
Article 9 recognises explicit consent as one of the exceptions for using biometric data. But while consent sounds like the safest and easiest route, under the GDPR it is not that simple.
The Regulation sets a high standard for consent where biometric data is involved. It must be explicit, informed, freely given, and revocable at any time. All four conditions must be met. If one fails, the consent is invalid.
The main problem is freedom. Consent is not considered freely given where there is a power imbalance between the individual and the entity or organization seeking consent. This includes schools, workplaces, housing, and access to essential services. In these settings, refusal may carry consequences, so GDPR does not treat such consent as genuine.
This was central to the Swedish school attendance case. Students and parents were informed, and participation was described as voluntary. However, the Swedish Data Protection Authority (IMY) rejected this argument, arguing that students are dependent on the school. Saying “no” was not a realistic option. Because the relationship was unequal, the consent was invalid.
Consent also fails when the user has no genuine alternative. The GDPR requires that consent must be freely given, which means people must have a real choice to say “no” to facial recognition and still access the service they want. If, for instance, the only practical way to enter an office building, board a flight, or use an app is by scanning a face, then that consent is considered forced. This design choice legally invalidates the consent because the individual has no leverage to refuse. To comply with the law, organizations must offer an equivalent, non-biometric option, such as a key card, a traditional ID check, or a password, that works just as easily as the facial scan. Without a clear and simple alternative, the legal basis of consent cannot be relied upon.
Information quality matters as well. Consent must be specific and understandable. Long privacy notices, technical language, or vague descriptions of future use undermine validity. Facial recognition systems often fail to explain how biometric templates are created, stored, or reused. Incomplete information means invalid consent.
Withdrawal is another weak point. Consent must be as easy to withdraw as it is to give. Many biometric systems do not allow simple withdrawal without losing access or functionality. If withdrawal leads to disadvantage, the consent was never freely given.
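To illustrate how these validity conditions might be tracked in practice, here is a minimal Python sketch (the BiometricConsent class and all field names are hypothetical, not drawn from any real system): consent is bound to a single stated purpose, recorded as a positive act, and withdrawal is one call that immediately invalidates it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical record of explicit biometric consent, illustrating the four
# validity conditions: explicit, informed, freely given, and revocable.
@dataclass
class BiometricConsent:
    subject_id: str
    purpose: str           # consent is bound to one specific purpose
    notice_version: str    # which plain-language notice was shown
    given_at: Optional[datetime] = None
    withdrawn_at: Optional[datetime] = None

    def give(self) -> None:
        # Must record a clear, positive action by the individual.
        self.given_at = datetime.now(timezone.utc)

    def withdraw(self) -> None:
        # Withdrawal must be as easy as giving consent, with no penalty.
        self.withdrawn_at = datetime.now(timezone.utc)

    def is_valid_for(self, purpose: str) -> bool:
        # Consent for one biometric purpose does not extend to another.
        return (
            self.given_at is not None
            and self.withdrawn_at is None
            and purpose == self.purpose
        )

consent = BiometricConsent("user-42", "building access", "notice-v3")
consent.give()
print(consent.is_valid_for("attendance tracking"))  # False: new purpose, fresh consent
consent.withdraw()
print(consent.is_valid_for("building access"))      # False: consent has been revoked
```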
Simply put, consent fails because facial recognition changes the balance of control. Individuals are asked to agree to irreversible identification in situations where refusal is difficult or costly. GDPR treats that reality seriously.
Limited Exceptions and the Path to Lawful Use
Narrow Article 9 Exceptions That May Apply
GDPR Article 9 allows processing of biometric data, but only through a closed list of exceptions. These exceptions are interpreted strictly. If an exception does not clearly apply, the processing is unlawful, regardless of the purpose, efficiency, or technology. The exceptions are as follows:
a) Explicit consent
This is the most cited exception, but it’s also frequently rejected because organizations usually get it wrong. First, explicit consent is a higher standard than ordinary consent. Silence, pre-ticked boxes, or implied agreement simply don’t qualify. European authorities across the board require a clear, positive action that shows an individual has clearly agreed to facial recognition. The EDPB has said that consent for biometric data must be given separately and for a specific purpose, and that agreeing to general terms of service is not valid consent for facial recognition.
Second, consent must be strictly purpose-bound. Facial recognition systems often expand over time. A system introduced for access control may later be reused for attendance tracking, analytics, or behavioural monitoring. Consent given for one biometric purpose does not extend to another, as each new use requires fresh, explicit consent. This point frequently emerges during audits, where authorities examine how systems evolved after deployment rather than how they were originally described.
Third, consent cannot be used to legitimize disproportionate processing. Even valid consent does not override GDPR’s core principles. Authorities still assess data minimisation, storage limitation, and necessity. In practice, this means that “they agreed” is not a defence if the system collects more biometric data than required, keeps it too long, or uses it in a high-risk way. Consent does not neutralise risk.
Fourth, consent collapses where dependency exists in practice, not just in theory. Regulators look beyond policy documents and assess how alternatives work in real life. If the non-biometric option is slower, stigmatizing, inconvenient, or visibly marks refusal, consent may still be invalid. Several authorities have noted that a theoretical alternative does not rescue consent if real-world pressure remains. The analysis focuses on lived experience, not formal design.
Finally, consent is fragile over time. Facial recognition systems are long-term by nature, but consent is not permanent. Organizations must be able to show that consent remains informed, specific, and valid throughout the system’s lifecycle. Changes in vendors, algorithms, data sharing, or risk profile can silently invalidate earlier consent. Many investigations identify failure at this stage, where consent is treated as a one-time checkbox rather than an ongoing legal condition.
b) Employment and Social Protection (Article 9(2)(b))
In limited situations, employment or social protection law may require identity verification at a very high level of certainty. In those cases, GDPR allows biometric data to be used, but only if strict conditions are met.
The key requirement is legal necessity. So, facial recognition is allowed under this exception only where national law requires or expressly permits biometric processing and provides appropriate safeguards to meet an employment, social security, or social protection obligation. The choice must come from the law, not from the employer.
In reality, this exception can work only when three elements align:
- First, there must be a clear legal obligation or right under national law. For example, a law may require secure identification to protect access to sensitive facilities, critical infrastructure, or high-risk environments. If the law merely requires accurate records or general security, the exception does not apply.
- Second, biometric identification must be strictly necessary to meet that legal obligation. Authorities ask a simple question: could the same legal goal be achieved realistically without facial recognition? If the answer is yes, the exception fails.
- Third, the use of facial recognition must be limited and safeguarded. Even where law and necessity align, GDPR still requires strong protections for workers. This includes purpose limitation, minimal data collection, short retention periods, strict access controls, and effective oversight. Facial recognition must remain exceptional, not normalised.
In the UK Serco Group case, the company argued that biometric systems were needed to meet employment obligations. The ICO rejected that argument: while accurate attendance records were required, no law required biometric identification to achieve that goal. Alternative methods existed, so facial recognition was not legally necessary. The system was stopped, and the data had to be deleted.
Across Europe, authorities apply the same standards. Employment law often sets outcomes, such as safety, fairness, and accurate records, but it rarely dictates biometric methods. Where the law is silent on biometrics, organizations cannot fill the gap with their own technology choices.
National law also matters. Article 9(2)(b) only works where domestic legislation clearly supports biometric use and provides safeguards. In countries like the Netherlands, with explicit workplace biometric legislation, a narrow path may exist. However, where national law does not speak clearly, reliance on this exception becomes extremely difficult.
So, outside all these conditions, reliance on Article 9(2)(b) is unlikely to withstand regulatory scrutiny.
c) Substantial Public Interest (Article 9(2)(g))
The substantial public interest exception is mainly used by public authorities. Private organizations sometimes rely on it, but only in limited and legally defined situations. The name sounds broad, but the legal test is strict.
This exception starts with law, not intention. Facial recognition may be allowed only where national or EU law clearly says that biometric processing may be used for a specific public interest purpose. The public interest must be written into law. It cannot be declared by the organization itself.
The interest must also be “substantial,” meaning it must address a serious issue for society, not just a convenience for a business. For example, laws might allow biometrics to prevent organized crime or to secure national borders. However, things like improving store efficiency, making check-ins faster, or saving on staffing costs never qualify.
Necessity comes next. Facial recognition must be necessary to achieve that public interest. Authorities ask whether the public interest goal can realistically be met without biometric identification. If traditional checks, ID verification, or targeted controls would work, facial recognition fails. Safeguards are also essential. The law must include strong safeguards: mandatory rules that, among other things, limit how long the data is kept, who can see it, and how it is protected. Facial recognition must be targeted, not general. Open-ended or mass use undermines the exception.
Generally, this exception operates in clearly regulated statutory environments, such as border management, immigration control, or other public security functions carried out under specific national or EU legislation. Where processing is conducted by competent authorities for criminal law purposes, it typically falls under the Law Enforcement Directive rather than the GDPR. In either framework, biometric use must be expressly authorised by law, proportionate, and subject to strict safeguards.
d) Vital Interests (Article 9(2)(c))
The vital interests exception is the narrowest of all GDPR exceptions. It exists for emergencies, making it more of a last-resort safety valve. This exception applies only where facial recognition is used to protect someone’s life or prevent serious physical harm, and where consent or any other Article 9 exception is not realistically available.
The key condition is incapacity. The person whose biometric data is processed must be physically or legally unable to give consent. Think of unconscious patients, victims of serious incidents, or individuals facing immediate, life-threatening danger. Again, necessity matters. Facial recognition must be genuinely needed to protect life. If identity could be confirmed through documents, relatives, or manual checks, the exception does not apply.
This exception also assumes urgency. It is meant for moments where delay creates real risk. Planned, routine, or ongoing biometric systems cannot rely on vital interests, nor can processing aimed at organizational, investigative, or general public objectives.
The Real-World Compliance Test: DPIAs, Safeguards, and Audit Trails
Why Facial Recognition Almost Always Triggers a Mandatory DPIA
Under the GDPR, a Data Protection Impact Assessment (DPIA) is required whenever processing is likely to result in a high risk to people’s rights and freedoms. As set out in Article 35, when those risk conditions are met, a DPIA is not optional. Facial recognition meets that threshold.
First is the type of data involved. Facial recognition uses biometric data to identify individuals. Biometric data is classed as special-category data under GDPR. The law treats this type of data as highly sensitive because it is unique, permanent, and cannot be changed if misused. So, large-scale or repeated processing of biometric data is one of the clearest triggers for a DPIA under Article 35.
Second is how the technology operates. Facial recognition systems work by automatically scanning, comparing, and identifying people. This is a form of automated and systematic monitoring. GDPR specifically treats systematic monitoring of individuals as high-risk processing, especially when it happens in workplaces, schools, public spaces, or other environments people cannot easily avoid.
Third, there is risk accumulation. Facial recognition usually combines several risk factors at once: sensitive data, automation, scale, and impact on individual rights. When these factors appear together, GDPR expects organizations to carry out a DPIA before deployment.
Therefore, because facial recognition involves sensitive biometric data, automated identification, and repeated monitoring, regulators expect a DPIA in nearly every real-world use case. Organizations must assess risks in advance, document them, and show how they will reduce those risks before the system goes live.
Operational Safeguards Regulators Are Looking For Today
Regulators do not just check whether an organization has a DPIA. They also check whether the systems, controls, and safeguards described in that DPIA are live and working in reality. For facial recognition, authorities expect stronger safeguards than for ordinary data systems because of the high risk involved. Here are the top safeguards expected:
a) Purpose Limitation and Minimal Scope
When regulators investigate facial recognition, one of the first things they examine is whether the system is still being used only for the purpose originally assessed. Many cases do not fail because facial recognition was unlawful from the start, but because it slowly expanded into new uses without a fresh assessment. A system introduced for access control may later be used for monitoring, analytics, or verification in new contexts. To avoid this, organizations are expected to lock the system to a single, clearly defined purpose and prevent reuse by design. If the purpose changes, even slightly, regulators expect a new risk assessment before any expansion. Facial recognition is tolerated only when it remains tightly contained.
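A purpose lock of this kind can be enforced in code as well as in policy. The following is a minimal sketch under assumed names (ASSESSED_PURPOSE and run_recognition are illustrative, not a real API): any request whose declared purpose differs from the one assessed in the DPIA is refused outright.

```python
# Hypothetical purpose lock: the system refuses any request whose declared
# purpose differs from the single purpose assessed in the DPIA.
ASSESSED_PURPOSE = "access control"  # the only purpose covered by the DPIA

def run_recognition(image_ref: str, declared_purpose: str) -> None:
    if declared_purpose != ASSESSED_PURPOSE:
        # Function creep is blocked by design: a new purpose requires a
        # fresh risk assessment before the system will process anything.
        raise PermissionError(
            f"purpose '{declared_purpose}' is not covered by the current DPIA"
        )
    print(f"processing {image_ref} for '{declared_purpose}'")

run_recognition("frame-001.jpg", "access control")  # allowed: assessed purpose
try:
    run_recognition("frame-002.jpg", "attendance tracking")
except PermissionError as err:
    print(err)  # refused: the system cannot silently expand its use
```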
b) Ongoing necessity checks
Regulators do not accept that facial recognition is justified forever once it has been approved. Therefore, they will look at whether the system is still necessary at the time of use, not just when it was first deployed. Technology evolves quickly, and less intrusive alternatives may become viable over time. Organizations are therefore expected to periodically reassess whether facial recognition is still needed and document that review. Where necessity is not revisited, regulators treat the system as operating on outdated assumptions, which can undermine its legality even if the original deployment was lawful.
c) Evidence that less intrusive options were genuinely ruled out
Even after legality is established, authorities continue to question whether facial recognition remains the least intrusive way to achieve the stated goal. Claims that other methods are inconvenient, slower, or less modern are not persuasive on their own. Regulators look for evidence, not preference. What this means is that organizations should be able to show whether alternatives such as ID cards, PINs, guards, or manual checks were tried or seriously considered and why they did not work in the real world. Where this evidence is missing, the authorities often conclude that facial recognition was excessive, even if it was legally permitted in principle.
d) Meaningful human involvement in decisions
Facial recognition systems can make mistakes, and regulators treat those mistakes as a serious risk to individuals. Where system outputs lead to consequences, it’s expected that a real human decision-maker be involved before action is taken. This human involvement must be genuine. If staff consistently follow system results without questioning them, the authorities treat the process as effectively automated. Reviewers must be trained, understand the system’s limits, and have the authority to override it. Human review is meant to prevent harm, not just satisfy a formal requirement.
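One way such a gate might look in software is sketched below (MatchCandidate and the decision flow are hypothetical): the system’s match score never triggers action on its own; a reviewer with override authority makes the final call.

```python
from dataclasses import dataclass

# Hypothetical human-in-the-loop gate, not drawn from any real product.
@dataclass
class MatchCandidate:
    subject_ref: str
    similarity: float  # the system's match score

def act_on_match(candidate: MatchCandidate, reviewer_decision: str) -> str:
    # No action is ever taken on the score alone; a trained reviewer with
    # authority to override the system makes the final decision.
    if reviewer_decision == "confirm":
        return f"action taken for {candidate.subject_ref} after human review"
    return f"no action: reviewer rejected the match for {candidate.subject_ref}"

candidate = MatchCandidate("subject-17", similarity=0.97)
# Even a high-similarity match waits for a genuine human decision.
print(act_on_match(candidate, reviewer_decision="reject"))
```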
e) Accuracy monitoring and clear error thresholds
Lawful use does not excuse poor system performance. Regulators increasingly look at whether organizations actively monitor error rates and understand how often the system gets things wrong. False positives and false negatives are issues that directly affect fairness and proportionality. Therefore, organizations are expected to define acceptable accuracy levels, monitor performance over time, and act when error rates increase. Where systems continue to operate despite known accuracy problems, this is viewed as a failure to control risk.
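As a rough illustration of what such monitoring could involve, the sketch below (the thresholds and counts are invented for the example, not recommended values) computes false positive and false negative rates and raises an alert when either exceeds its pre-defined limit.

```python
# Hypothetical accuracy monitor with invented numbers.
def error_rates(fp: int, fn: int, total_attempts: int) -> tuple[float, float]:
    # False positive rate and false negative rate over all attempts.
    return fp / total_attempts, fn / total_attempts

MAX_FALSE_POSITIVE_RATE = 0.001  # acceptable limits defined before deployment
MAX_FALSE_NEGATIVE_RATE = 0.01

fp_rate, fn_rate = error_rates(fp=9, fn=42, total_attempts=5000)
if fp_rate > MAX_FALSE_POSITIVE_RATE or fn_rate > MAX_FALSE_NEGATIVE_RATE:
    # Operating on despite known accuracy problems is treated as a failure
    # to control risk, so breaching a threshold should trigger escalation.
    print(f"alert: FPR={fp_rate:.4f}, FNR={fn_rate:.4f} exceed agreed thresholds")
```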
f) Data minimisation built into the system design
Regulators pay close attention to how facial recognition systems are designed, not just how they are described on paper. Systems that collect or store more biometric data than necessary are treated as higher risk by default. Designs that avoid storing raw images where possible, rely on biometric templates, and separate biometric data from names or identifiers are highly favoured. These design choices are treated as safeguards because they reduce harm if something goes wrong. Poor system architecture can undermine compliance even if policies look strong.
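A minimal sketch of such a minimised design, with hypothetical stores and tokens, might look like this: raw images are never retained, and the biometric store is kept apart from the table linking pseudonyms to names.

```python
from dataclasses import dataclass

# Hypothetical minimised design: the stores and tokens are illustrative.
@dataclass
class BiometricRecord:
    pseudonym: str         # a random token, not a name
    template: list[float]  # derived template; the raw image is discarded

# Biometric store: templates keyed by pseudonymous tokens only.
biometric_store = {
    "tkn-8f3a": BiometricRecord("tkn-8f3a", [0.12, 0.83, 0.45]),
}
# Kept in a separate system, under separate access controls.
identity_store = {"tkn-8f3a": "A. Example"}

# Matching runs on templates alone; re-identification requires both stores,
# which limits the damage if either one is compromised.
record = biometric_store["tkn-8f3a"]
print(f"matched against {record.pseudonym} without touching any name")
```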
g) Strict retention limits with automatic deletion
Retention is one of the most consistently enforced areas. Regulators expect biometric data to be kept only for as long as it is genuinely needed for the specific purpose. Retention without a clear endpoint, or “just in case” storage, is treated as incompatible with necessity. What matters in practice is enforcement. Automatic deletion mechanisms are expected. Manual processes or informal practices are viewed as unreliable, especially given the sensitivity of biometric data.
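Automatic deletion is straightforward to express in code. Below is an illustrative sketch (the 30-day period and the store are invented): each template carries a creation timestamp, and a scheduled sweep deletes anything past the retention period without human intervention.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention sweep with an invented retention period.
RETENTION = timedelta(days=30)  # tied to the specific, documented purpose

templates = {
    "tkn-8f3a": datetime.now(timezone.utc) - timedelta(days=45),  # expired
    "tkn-2c91": datetime.now(timezone.utc) - timedelta(days=3),   # still needed
}

def sweep(store: dict) -> None:
    # Deletion is enforced by code on a schedule, not by manual practice.
    now = datetime.now(timezone.utc)
    expired = [t for t, created in store.items() if now - created > RETENTION]
    for token in expired:
        del store[token]
        print(f"deleted {token}: retention period exceeded")

sweep(templates)  # removes tkn-8f3a, keeps tkn-2c91
```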
h) Tight access controls and auditability
Facial recognition data must be accessible only to a small number of authorised people. Regulators look closely at who can access biometric data and whether that access is properly logged. Audit logs are important because they allow regulators to verify that access is limited, purposeful, and accountable. Where organisations cannot show who accessed biometric data and why, this is treated as a failure of accountability, even if no data breach occurred.
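A simple sketch of audit-logged access, with hypothetical role names and stores, might look like the following: every read attempt is recorded with who, what, when, and why, and unauthorised users are refused.

```python
from datetime import datetime, timezone

# Hypothetical access gate: the role names and allow-list are illustrative.
AUTHORISED = {"security-officer-1", "dpo"}  # deliberately small allow-list
audit_log: list[dict] = []

def read_template(user: str, token: str, reason: str) -> str:
    # Every attempt is logged with who, what, when, and why, so access can
    # be shown to be limited, purposeful, and accountable.
    granted = user in AUTHORISED
    audit_log.append({
        "user": user,
        "token": token,
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
        "granted": granted,
    })
    if not granted:
        raise PermissionError(f"{user} is not authorised to access biometric data")
    return f"template {token} released to {user}"

print(read_template("dpo", "tkn-8f3a", reason="subject access request"))
print(audit_log[-1])  # the log entry a regulator would expect to see
```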
Enforcement Reality: What Is Happening Today
Regulators in the EU and UK are actively enforcing the law. The trend shows that authorities are targeting real systems, penalising companies, and even exploring new legal fronts such as criminal complaints. These developments signal where enforcement is headed, and what organisations should take seriously.
a) Clearview AI: Repeated GDPR Violations Across Europe
Clearview AI is a US-based company that developed a facial recognition system by scraping billions of images from websites and social media platforms. These images were converted into biometric templates and made searchable by customers, including law enforcement agencies. Individuals whose images were collected were not informed, and no consent was obtained.
Between 2022 and 2024, several European data protection authorities took enforcement action against Clearview. France’s CNIL fined the company €20 million and ordered it to stop collecting and using biometric data of people in France and to delete existing data. Italy’s data protection authority issued a similar €20 million fine and deletion order. Greece imposed a fine of approximately €9 million and ordered Clearview to cease processing Greek residents’ data. In 2024, the Dutch Data Protection Authority imposed a €30.5 million fine, citing unlawful biometric processing, lack of transparency, and failure to comply with previous orders. Each authority concluded that Clearview lacked a valid legal basis under Articles 6 and 9 GDPR.
In the UK, the ICO fined Clearview £7.5 million and ordered it to stop processing UK residents’ data. While parts of that decision were later challenged on jurisdictional grounds, UK authorities maintained that the company’s activities fell within the scope of UK data protection law.
b) Austria’s 2025 Criminal Complaint: Escalating Enforcement
In 2025, the Austrian privacy organisation noyb filed a criminal complaint against Clearview AI with Austrian prosecutors. The complaint alleged continued unlawful processing of biometric data despite multiple enforcement actions across Europe and relied on Austrian law implementing GDPR provisions that allow criminal penalties for serious data protection violations.
Unlike administrative fines issued by data protection authorities, the complaint sought to trigger criminal investigation mechanisms. The filing argued that repeated non-compliance and continued biometric processing could justify criminal liability, potentially extending to individuals responsible for the company’s operations if prosecutors chose to pursue the case.
c) UK ICO: Facial Recognition Use Is Lawful Only With Strict Controls
The UK Information Commissioner’s Office has issued enforcement actions and formal opinions concerning the use of facial recognition technology, including by public authorities. The ICO has examined deployments involving police forces and other organisations, focusing on legality, necessity, accuracy, transparency, and governance controls.
In recent years, the ICO has required organisations to justify facial recognition use through detailed DPIAs and has warned that deployments lacking sufficient safeguards or clear justification may be unlawful. The regulator has emphasised that law enforcement use of facial recognition remains subject to UK data protection law and is not exempt from regulatory scrutiny.
What These Cases Signal About Regulator Priorities
Together, these cases reveal several clear enforcement priorities from European and UK regulators:
a) Large-scale biometric databases are now a central enforcement priority
The Clearview AI decisions across several EU countries and the UK show that regulators are giving particular attention to facial recognition systems built on large-scale biometric databases. Beyond the technical compliance issues, authorities focused on the very nature of the system itself: collecting and storing facial images at a massive scale. Even though the images were publicly available online, regulators treated the act of scraping, aggregating, and converting them into biometric templates as a serious intrusion. This category of facial recognition is now viewed as inherently high risk, especially outside tightly controlled law-enforcement contexts. The repeated enforcement actions suggest that systems built around mass facial data collection are close to being unacceptable by default in civilian use.
b) Arguments based on consent or public availability are being firmly rejected
Another clear regulatory direction is the rejection of legal arguments based on the idea that images are public or that facial recognition serves general security or investigative interests. In the Clearview cases, regulators consistently dismissed claims that publicly accessible images could be reused freely for biometric identification. They also rejected broad appeals to public interest or crime prevention. This reflects a settled position that biometric data remains highly protected regardless of visibility, and that Articles 6 and 9 GDPR are applied strictly in this context. Regulators are signalling that facial recognition sits in a narrow legal space where creative or flexible interpretations of lawful basis are unlikely to succeed.
c) Failure to comply after enforcement is becoming a separate enforcement risk
The Austrian criminal complaint against Clearview AI points to a developing concern that goes beyond the initial illegality of a system. The focus here is on continued biometric processing after regulators have already ordered it to stop. Even though the outcome of the complaint is not yet known, the fact that criminal routes are being explored is significant. It shows growing regulatory frustration with organisations that ignore fines or corrective orders. This suggests that non-compliance itself may increasingly trigger escalation, potentially exposing companies and individuals to more severe consequences independent of the original GDPR breach.
d) Facial recognition is now treated as a maximum-risk technology
The UK ICO’s position reinforces the idea that facial recognition is no longer assessed like ordinary data processing. Regulators apply a much higher threshold, requiring strong proof of necessity and proportionality, detailed and robust DPIAs, strict accuracy controls, and safeguards against misuse or expansion. Even organizations with limited or well-intentioned use cases are expected to meet this elevated standard. Where they cannot, the authorities have shown little hesitation in intervening. This reflects a broader regulatory consensus that facial recognition presents exceptional risks to individuals’ rights and must be justified at every level.
e) Enforcement has clearly moved from theory into active practice
Taken together, these cases show that facial recognition enforcement is no longer speculative or future-oriented. Regulators are already imposing large fines, ordering organisations to stop processing biometric data, coordinating actions across borders, and experimenting with new enforcement tools. For organizations, this removes any remaining sense that facial recognition sits in a legal grey area awaiting clarification. It is now an enforcement-active space where real deployments are scrutinised, and real consequences follow.
Common Mistakes Organizations Still Make with Facial Recognition Technology
1. Ignoring Consent and Legal Basis Requirements for Biometric Data
Some organizations deploy facial recognition cameras in public areas without a valid legal basis or proper consent. Retailers and other private entities have been criticised by privacy authorities for capturing biometric data from hundreds of thousands of people without a clear lawful basis, even when they claim an exemption for tackling misconduct. In Spain, the supermarket chain Mercadona was fined €2.5 million for using facial recognition in 48 stores. The company argued the system was “strictly for security” to identify people with existing restraining orders. However, the Spanish regulator (AEPD) ruled the system was illegal because it scanned everyone who entered, including children and employees, without a valid legal reason.
2. Deploying Systems Without Adequate DPIAs or Risk Assessments
Many organizations deploy facial recognition systems without conducting a robust Data Protection Impact Assessment (DPIA), or they conduct only a superficial one that does not match the real use case. Regulators, including those in Sweden and the UK, have flagged unlawful processing and missing DPIAs when police and other public bodies used facial recognition tools without proper risk assessment. For example, Swedish authorities penalised the police for using facial recognition technology without conducting the necessary DPIA and for failing to implement organizational measures that demonstrate compliance.
3. Treating Facial Recognition as “Low-Risk” or Routine Security
Some organizations implement facial recognition as if it were just another security camera upgrade. This mistake emerges when businesses install systems for convenience (e.g., access control, queue management) without recognising the high-risk nature of biometric identification. Regulators expect organisations to justify necessity, explore alternatives, and explain why more privacy-friendly options would not work; mistakes arise when this analysis is skipped or downplayed.
4. Not Providing Clear Transparency or Notice to Individuals
Regulators increasingly criticise facial recognition deployments where people are not told their images are being collected or processed. A notable mishap outside the EU — a vending machine at a Canadian university — revealed hidden facial recognition use because a malfunction exposed the software, and students reported no prior notice or permission. While not a GDPR case, the reaction underscores how a lack of transparency causes compliance and trust problems.
5. Ignoring Bias, Accuracy, and Fairness Issues
Organizations and authorities have reported real problems with bias and accuracy in facial recognition use. For example, UK regulators demanded urgent clarification on racial bias in police facial recognition technology after tests showed significantly higher false positives for Black and Asian individuals compared to White people. Regulators raised concerns about transparency and trust before considering enforcement steps.
Final thought
Facial recognition technology is governed by one of the strictest regimes in the GDPR. This is deliberate. Because biometric identification directly affects identity and freedom, the law demands more than efficiency or innovation as justification.
Across consent rules, Article 9 exceptions, DPIAs, and enforcement actions, the message from regulators is consistent: facial recognition must be exceptional, strictly necessary, and tightly controlled. Paper compliance is not enough. Authorities look at how systems work in reality, whether people have real choices, and whether less intrusive alternatives were genuinely considered.
For most organisations, lawful use is possible only in narrow circumstances and with strong safeguards in place. Where those conditions are not met, enforcement risk is no longer theoretical. Facial recognition is judged not by what it can do, but by whether it respects fundamental rights in practice.