Features
How the EU AI Act Supplements GDPR in the Protection of Personal Data
Published: June 18, 2025

Marc Schuler, Taylor Wessing (Paris, France), Data Protection Committee
After several months of heightened anticipation in the legal and technology spheres, the European Parliament adopted Regulation (EU) 2024/1689 laying down harmonized rules on artificial intelligence (more commonly known as the AI Act) on June 13, 2024. The regulation came into force on August 1, 2024, and will be implemented progressively over three years (that is, until August 2, 2027).
The AI Act is one of the world’s first pieces of legislation to regulate artificial intelligence (AI) from its development to its implementation. The scope of this regulation is particularly broad, both in terms of subject matter and of the parties involved: it applies to all parties who operate commercially within the EU, including parties established outside the EU.
It is therefore essential that all economic actors develop a comprehensive understanding of the legal implications of this new regulation, as it might impact them sooner rather than later.
Purpose and Substance of the AI Act
The purpose of the AI Act is to regulate the use of AI while promoting its development in a safe and ethical environment. To this end, it distinguishes between AI systems and general-purpose models.
For AI systems, the AI Act adopts a risk-based approach, laying down specific requirements depending on the level of risk of the AI system:
- Certain AI systems that infringe on fundamental rights, such as AI systems used for social scoring, are considered to pose an “unacceptable risk” and are consequently prohibited.
- AI systems that might impact an individual’s safety or fundamental rights, such as AI systems used for recruitment, are considered to pose a “high risk.” These systems are bound by stricter requirements, including conformity assessments and risk management processes.
- Other AI systems are considered to pose either a “limited risk” or a “minimal to non-existent risk.” These systems are subject at most to a documentation or transparency obligation, depending on their nature.
For “general-purpose models” (GP models) and “AI systems based on these models” (GPAI systems), the AI Act adopts a different approach. General-purpose models are AI models that are trained on large amounts of data, typically using self-supervision at scale, and are capable of competently performing a wide range of distinct tasks. Generative AI, for instance, is a field where most AI systems are based on GP models.
The AI Act provides a specific regime for GP models that present a “systemic risk,” in that their capabilities match or exceed those of the most advanced GP models. The providers of these models must implement risk mitigation procedures and inform competent authorities in case of an incident.
Other GP models are only subject to a documentation obligation, designed so that providers of GPAI systems can understand the functioning of the GP model on which their AI system is based and subsequently comply with their own obligations under the AI Act.
The AI Act further provides that generative AI systems and chatbots (whether they are “traditional” AI systems or GPAI systems) are subject to transparency obligations. Users of a chatbot must be informed that they are interacting with an AI, and the content generated via an AI system must be identified as such.
Through this categorization of AI systems, the AI Act aims to structure the development of AI while providing adequate protection for the users of AI systems and, more generally, for EU citizens.
How the AI Act Affects the GDPR Requirements
Some AI systems involve the processing of personal data, whether at the development or deployment stage. In such cases, both the AI Act and the General Data Protection Regulation (GDPR) will apply, which raises the question of the connections between these two regulations.
In all cases, AI systems involving the processing of personal data must meet the GDPR requirements. Where the AI Act does not specify data governance rules, the GDPR applies as usual. For instance, the notion of data controller is not detailed in the AI Act; to determine who the data controller is for the processing of personal data in an AI system, one must therefore refer to the definition set out in the GDPR.
Subject to further assessment on a case-by-case basis, this definition will generally lead to the following parties qualifying as data controllers:
- The provider of the AI system or AI model during the development phase; and
- The deployer or user of the AI system during the deployment phase.
This notion of a controller is central, as brand owners using AI technologies must understand how liabilities are allocated among the different stakeholders in these two phases. Even as a simple deployer or user of an AI system, a brand owner might be considered a data controller and have obligations under the GDPR. A typical example would be the deployment of a chatbot as part of customer service, which would process personal data such as the name or bank details of the client.
Through specific provisions, the AI Act establishes a balance between the imperatives of personal data protection and public interest. This balance implies maintaining and applying the GDPR requirements in full, with very limited and always justified exceptions.
In some specific cases, the AI Act supplements the GDPR, either to impose further obligations or to implement specific exceptions such as the following:
- Article 5 of the AI Act allows, under specific conditions, the use of “real-time” remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement. This may be seen as an exception to the GDPR principle that prohibits the processing of biometric data for the purpose of identifying individuals.
- Article 10 of the AI Act allows providers of AI systems to process sensitive data when strictly necessary for the purpose of ensuring bias detection and correction in relation to high-risk AI systems. However, the processing of the sensitive data must be subject to appropriate safeguards for the fundamental rights and freedoms of individuals. To this end, the AI Act extends the obligations the GDPR provides for the exceptional processing of sensitive data. It adds, among others, the obligation to implement reinforced security measures, such as pseudonymization and restrictions on transmission (see the illustrative sketch after this list).
- Article 14 of the AI Act requires that high-risk AI systems be designed in such a way as to enable effective oversight by natural persons. For certain high-risk systems, notably remote biometric identification, it prohibits deployers from taking any decision or measure based on the system’s output without separate verification by at least two natural persons.
- Article 59 of the AI Act allows the exceptional re-use of personal data, including sensitive data, for the training of AI systems developed as part of an AI regulatory sandbox. An AI regulatory sandbox is a controlled environment that allows AI providers to test their innovations in real-world conditions under regulatory supervision. These sandboxes are put in place to encourage innovation in public interest fields, such as health care.
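By way of illustration only, the following Python sketch shows one common pseudonymization measure of the kind Article 10 contemplates: replacing direct identifiers in a bias-detection dataset with keyed, irreversible tokens. The field names, the sample record, and the key handling are hypothetical assumptions, not a prescribed method, and any real implementation would have to be assessed against the GDPR on a case-by-case basis.

```python
import hashlib
import hmac
import os

# Hypothetical secret key, kept separately from the dataset (for example,
# in a key-management service) so that the tokens cannot be reversed by
# anyone who holds the data alone.
SECRET_KEY = os.environ.get("PSEUDONYMIZATION_KEY", "change-me").encode()

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Hypothetical record from a bias-detection dataset: the attribute needed
# for the statistical analysis is retained, while the direct identifier
# is replaced by a stable pseudonym.
record = {"name": "Jane Doe", "gender": "F", "outcome": "rejected"}
record["name"] = pseudonymize(record["name"])
print(record)  # {'name': '1f3a…', 'gender': 'F', 'outcome': 'rejected'}
```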
Learn More About the EU AI Act at TMAP
AI in the EU—Navigating IP Opportunities and Challenges
Monday, September 29, 9:30 am–10:30 am
As AI transforms industries, IP rights in the European Union are evolving rapidly. This session will take an in-depth look at key legislation, such as the EU AI Act and the Digital Services Act, and discuss how these laws and regulations shape the landscape for AI innovation and IP protection. In addition, the panelists will delve into current AI-related IP policies, enforcement challenges, and anticipated future regulatory shifts impacting businesses and rights holders.
Moderator: Catherine Rifai, Senior Associate, Munck Wilson Mandala LLP (USA)
How the AI Act Takes Personal Data into Consideration
The AI Act reaffirms the fundamental right to the protection of personal data and explicitly follows the European legislation pertaining to this right. It recognizes the adverse impact the development of AI may have on the protection of personal data and calls for transparency in AI models and systems to offset this risk. It also highlights that compliance with GDPR requirements contributes to the proper functioning of AI systems.
It further emphasizes the ethical use and governance of data, including personal data that brand owners may process in the development of their own AI system or in the deployment and use of a provided AI system. This ethical use and governance rely on the principles of “data minimisation” and “data protection by design and by default,” as set out in the GDPR. Brand owners are encouraged to anonymize and encrypt the personal data they use, or to use any other technology that prevents the raw copying of structured data.
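As a purely illustrative sketch of such field-level encryption, assuming the widely used third-party cryptography library for Python, the example below encrypts a personal data field before it enters an AI pipeline or storage. Key management and the choice of technique remain the operator’s responsibility and are shown here only in simplified form.

```python
# pip install cryptography  (third-party library assumed here)
from cryptography.fernet import Fernet

# In practice the key would come from a key-management service;
# generating it inline is for illustration only.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical customer field: encrypt the personal data before it
# is stored or passed to a downstream system.
email = "client@example.com"
token = cipher.encrypt(email.encode())

# Only holders of the key can recover the original value.
assert cipher.decrypt(token).decode() == email
```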
To ensure the protection of personal data, the AI Act does the following:
- It limits the use of personal data. For instance, it specifies that the exception for free and open-source AI components does not apply if the AI component is monetized through the use of personal data (for reasons other than improving the security, compatibility, or interoperability of the software, with the exception of transactions between microenterprises).
- It focuses on the notion of consent in the processing of personal data, by stating that individuals who consent to participate in testing under an AI sandbox can decide to withdraw their consent to the processing of their personal data at any time.
- It enjoins each member state to establish or designate a national authority competent for the purposes of the AI Act. This national authority will act as a guarantor of the protection of personal data in the context of the Regulation.
- It makes GDPR compliance and personal data protection safeguards essential conditions for drawing up the EU declaration of conformity, which is a prerequisite for placing high-risk AI systems on the market.
Last but not least, it is worth noting that complying with the AI Act is likely to generate personal data processing. For instance, Articles 12 and 19 of the AI Act require the automatic recording of logs generated by high-risk AI systems. Yet, logs may also contain personal data (for example, in relation to users) that will have to be processed in compliance with the GDPR.
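To make this concrete, here is a minimal, hypothetical sketch of how a deployer might redact personal data from the logs that Articles 12 and 19 require to be recorded. The log format and the single pattern used are illustrative assumptions; a production system would need coverage for every category of personal data that can appear in its logs.

```python
import re

# Illustrative pattern for email addresses only; other identifiers
# (names, account numbers, IP addresses) would need their own handling.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(log_line: str) -> str:
    """Mask personal data before the log entry is retained."""
    return EMAIL_RE.sub("[REDACTED]", log_line)

# Hypothetical log entry from a high-risk AI system.
entry = "2025-06-18 10:02:11 decision=flagged user=jane.doe@example.com"
print(redact(entry))  # 2025-06-18 10:02:11 decision=flagged user=[REDACTED]
```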
The AI Act mostly serves as an ethical interpretative guide to the GDPR. In most cases, the brand owner will only have to apply the GDPR requirements to the AI environment. The instances in which the AI Act supplements the GDPR mostly concern the development of AI systems, which in turn concerns a smaller number of actors (most brand owners deploy or use AI systems provided by a third party). As such, one could think that mastering the GDPR would suffice when using AI systems.
Understanding how personal data and AI interact is, however, increasingly necessary for brand owners. Failure to implement measures for the protection of personal data in the development or deployment of an AI system can indeed result in fines for infringing the GDPR requirements. In that regard, the AI Act must be construed as a true ally, for it provides brand owners with keys to interpret and comply with the GDPR.
Although every effort has been made to verify the accuracy of this article, readers are urged to check independently on matters of specific concern or interest.
© 2025 International Trademark Association