On 17 December 2024, the European Data Protection Board (EDPB) issued Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models. The Opinion offers timely guidance on AI issues, addressing the anonymity of AI models, how controllers can demonstrate legitimate interest as a legal basis, and how unlawful processing of personal data in the development phase affects the lawfulness of the subsequent operation of the AI model.

Introducing new concepts

The EDPB provides its own interpretation of certain stages in the AI model life cycle, such as 'development' and 'deployment'.
Development of an AI model encompasses all stages preceding deployment, including code creation, collection of personal data for training, pre-processing of that data, and the training process. The deployment phase, by contrast, covers all stages in which the AI model is actively used after development.

Anonymity of the AI model

According to the EDPB, whether an AI model is anonymous must be assessed on a case-by-case basis. For an AI model to be anonymous, the likelihood of directly or probabilistically extracting personal data about individuals whose data were used for training must be negligible, and the risk of obtaining such personal data (even unintentionally) through queries must be insignificant. The EDPB stresses the need for a comprehensive evaluation of the likelihood of identification in order to determine whether an AI model can be classified as anonymous. This evaluation follows Recital 26 of the GDPR on the means reasonably likely to be used by the data controller or any other party, and should account for potential unintended reuse or disclosure of the model.

Developing and deploying an AI model based on legitimate interest

The EDPB asserts that legitimate interest under Art. 6(1)(f) of the GDPR cannot be relied on "by default" as the legal basis for the processing of personal data for the training and use of AI models.
Legitimate interest is acceptable only where the data controller can demonstrate, through the three-step legitimate interest test (identifying a legitimate interest, assessing the necessity of the processing, and balancing that interest against the rights and freedoms of data subjects), that the data processing related to the AI model is proportionate, necessary, and effective in achieving the intended purpose. The EDPB places significant emphasis on the reasonable expectations of data subjects, as well as on mitigating measures to be applied during the data processing.
AI models trained on unlawfully obtained personal data

The EDPB focuses on how unlawful processing in the development phase can affect the lawfulness (i.e. compliance with Art. 5(1)(a) and Art. 6 of the GDPR) of the subsequent processing or operation of the AI model. Where a breach of Art. 5 or Art. 6 of the GDPR is established in relation to the development phase, supervisory authorities may order stringent corrective measures, such as issuing fines, imposing temporary limitations on processing, erasing part or all of the dataset, or requiring the retraining of the AI model. The Opinion does not address how these obligations of the data controller translate into obligations for the parties along the AI value chain as defined in the AI Act (i.e. developers, providers or deployers); this is currently assessed on a case-by-case basis.
What's next?

The Opinion underscores the EDPB's commitment to ensuring that AI development adheres to GDPR principles, balancing innovation with privacy rights. Parties along the AI supply chain who may act as data controllers during the development or deployment of AI models should pay close attention to this guidance and incorporate its criteria into their AI training and implementation policies. For more information on this Opinion and AI regulations in the EU, contact your CMS client partner or our CMS experts.