The tension between non-discrimination law and data protection law: does the GDPR need a new exception to counter discrimination by artificial intelligence?
- URL: http://arxiv.org/abs/2509.08836v1
- Date: Mon, 01 Sep 2025 15:08:47 GMT
- Title: The tension between non-discrimination law and data protection law: does the GDPR need a new exception to counter discrimination by artificial intelligence?
- Authors: Marvin van Bekkum, Frederik Zuiderveen Borgesius
- Abstract summary: In Europe, an organisation runs into a problem when it wants to assess whether its AI system accidentally discriminates based on ethnicity. In principle, the GDPR bans the use of certain 'special categories of data'. This paper asks whether the GDPR's rules on special categories of personal data hinder the prevention of AI-driven discrimination.
- Score: 0.314299983260895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Organisations can use artificial intelligence to make decisions about people for a variety of reasons, for instance, to select the best candidates from many job applications. However, AI systems can have discriminatory effects when used for decision-making. To illustrate, an AI system could reject applications of people with a certain ethnicity, while the organisation did not plan such ethnicity discrimination. But in Europe, an organisation runs into a problem when it wants to assess whether its AI system accidentally discriminates based on ethnicity: the organisation may not know the applicants' ethnicity. In principle, the GDPR bans the use of certain 'special categories of data' (sometimes called 'sensitive data'), which include data on ethnicity, religion, and sexual preference. The proposal for an AI Act of the European Commission includes a provision that would enable organisations to use special categories of data for auditing their AI systems. This paper asks whether the GDPR's rules on special categories of personal data hinder the prevention of AI-driven discrimination. We argue that the GDPR does prohibit such use of special category data in many circumstances. We also map out the arguments for and against creating an exception to the GDPR's ban on using special categories of personal data, to enable preventing discrimination by AI systems. The paper discusses European law, but the paper can be relevant outside Europe too, as many policymakers in the world grapple with the tension between privacy and non-discrimination policy.
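The audit the abstract describes (checking whether an AI system's selection rates differ by ethnicity) can only be run if the organisation holds the group labels, which is exactly the data the GDPR restricts. A minimal, hypothetical sketch of such a selection-rate check in Python (function names are ours; the 0.8 threshold echoes the US "four-fifths rule" and is not a GDPR standard):

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, accepted) pairs -> selection rate per group."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        accepted[group] += int(ok)
    return {g: accepted[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.
    Ratios below roughly 0.8 are often treated as a red flag."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]
```

The point of the sketch is that both functions take the group label as input: without (special category) data on ethnicity, the audit simply cannot be computed.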
Related papers
- Strengthening legal protection against discrimination by algorithms and artificial intelligence [1.0406659081400351]
The paper evaluates current legal protection in Europe against discriminatory algorithmic decisions.
The paper argues for sector-specific - rather than general - rules, and outlines an approach to regulate algorithmic decision-making.
arXiv Detail & Related papers (2025-10-03T09:54:03Z) - It's complicated. The relationship of algorithmic fairness and non-discrimination regulations for high-risk systems in the EU AI Act [2.9914612342004503]
The EU has recently passed the AI Act, which mandates specific rules for high-risk systems.
This paper aims to bridge the concepts of legal non-discrimination regulations and machine learning based algorithmic fairness concepts.
arXiv Detail & Related papers (2025-01-22T15:38:09Z) - Generative Discrimination: What Happens When Generative AI Exhibits Bias, and What Can Be Done About It [2.2913283036871865]
The chapter explores how genAI intersects with non-discrimination laws.
It highlights two main types of discriminatory outputs: (i) demeaning and abusive content and (ii) subtler biases due to inadequate representation of protected groups.
It argues for holding genAI providers and deployers liable for discriminatory outputs and highlights the inadequacy of traditional legal frameworks to address genAI-specific issues.
arXiv Detail & Related papers (2024-06-26T13:32:58Z) - Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z) - Non-discrimination law in Europe: a primer for non-lawyers [44.715854387549605]
We aim to describe the law in such a way that non-lawyers and non-European lawyers can easily grasp its contents and challenges.
We introduce the EU-wide non-discrimination rules which are included in a number of EU directives.
The last section broadens the horizon to include bias-relevant law and cases from the EU AI Act, and related statutes.
arXiv Detail & Related papers (2024-04-12T14:59:58Z) - Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z) - Using sensitive data to prevent discrimination by artificial intelligence: Does the GDPR need a new exception? [0.0]
In Europe, an organisation runs into a problem when it wants to assess whether its AI system accidentally discriminates based on ethnicity.
In principle, the GDPR bans the use of certain 'special categories of data'.
This paper asks whether the rules on special categories of personal data hinder the prevention of AI-driven discrimination.
arXiv Detail & Related papers (2022-05-17T07:39:25Z) - Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA)
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI [10.281644134255576]
The article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness.
We show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate.
We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement.
arXiv Detail & Related papers (2020-05-12T16:30:12Z) - Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.