Using sensitive data to prevent discrimination by artificial
intelligence: Does the GDPR need a new exception?
- URL: http://arxiv.org/abs/2206.03262v3
- Date: Mon, 28 Nov 2022 11:34:36 GMT
- Title: Using sensitive data to prevent discrimination by artificial
intelligence: Does the GDPR need a new exception?
- Authors: Marvin van Bekkum, Frederik Zuiderveen Borgesius
- Abstract summary: In Europe, an organisation runs into a problem when it wants to assess whether its AI system accidentally discriminates based on ethnicity.
In principle, the GDPR bans the use of certain 'special categories of data'.
This paper asks whether the GDPR's rules on special categories of personal data hinder the prevention of AI-driven discrimination.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Organisations can use artificial intelligence to make decisions about people
for a variety of reasons, for instance, to select the best candidates from many
job applications. However, AI systems can have discriminatory effects when used
for decision-making. To illustrate, an AI system could reject applications of
people with a certain ethnicity, while the organisation did not plan such
ethnicity discrimination. But in Europe, an organisation runs into a problem
when it wants to assess whether its AI system accidentally discriminates based
on ethnicity: the organisation may not know the applicants' ethnicity. In
principle, the GDPR bans the use of certain 'special categories of data'
(sometimes called 'sensitive data'), which include data on ethnicity, religion,
and sexual preference. The proposal for an AI Act of the European Commission
includes a provision that would enable organisations to use special categories
of data for auditing their AI systems. This paper asks whether the GDPR's rules
on special categories of personal data hinder the prevention of AI-driven
discrimination. We argue that the GDPR does prohibit such use of special
category data in many circumstances. We also map out the arguments for and
against creating an exception to the GDPR's ban on using special categories of
personal data, to enable preventing discrimination by AI systems. The paper
discusses European law, but the paper can be relevant outside Europe too, as
many policymakers in the world grapple with the tension between privacy and
non-discrimination policy.
Related papers
- Generative Discrimination: What Happens When Generative AI Exhibits Bias, and What Can Be Done About It [2.2913283036871865]
The chapter explores how genAI intersects with non-discrimination laws.
It highlights two main types of discriminatory outputs: (i) demeaning and abusive content and (ii) subtler biases due to inadequate representation of protected groups.
It argues for holding genAI providers and deployers liable for discriminatory outputs and highlights the inadequacy of traditional legal frameworks to address genAI-specific issues.
arXiv Detail & Related papers (2024-06-26T13:32:58Z)
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns. (A toy statistical test in this spirit is sketched after this entry.)
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
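As a toy illustration of the idea (not the paper's actual audit methodology), the statistical core of such a delivery audit is a comparison of delivery rates across groups. The Python sketch below runs a two-proportion z-test on made-up counts; all numbers are hypothetical.

```python
# Toy two-proportion z-test for disparity in ad delivery rates.
# All counts below are made up for illustration; a real third-party
# audit (paired ads, audience construction, etc.) is far more involved.
from math import sqrt, erf

def two_proportion_z(shown_a, total_a, shown_b, total_b):
    """Z-test: does group A's delivery rate differ from group B's?"""
    p_a, p_b = shown_a / total_a, shown_b / total_b
    p_pool = (shown_a + shown_b) / (total_a + total_b)    # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical audit: the same education ad delivered to two audiences.
z, p = two_proportion_z(shown_a=620, total_a=10_000, shown_b=540, total_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a non-random disparity
```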
- Non-discrimination law in Europe: a primer for non-lawyers [44.715854387549605]
We aim to describe the law in such a way that non-lawyers and non-European lawyers can easily grasp its contents and challenges.
We introduce the EU-wide non-discrimination rules which are included in a number of EU directives.
The last section broadens the horizon to include bias-relevant law and cases from the EU AI Act and related statutes.
arXiv Detail & Related papers (2024-04-12T14:59:58Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets. (A heavily simplified sketch of the discriminator-free idea follows this entry.)
arXiv Detail & Related papers (2022-04-08T04:40:18Z)
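The PyTorch sketch below illustrates the discriminator-free idea in simplified form: a gradient reversal layer lets the task classifier double as the critic, and the batch nuclear norm of its softmax predictions (the quantity DALN's nuclear-norm Wasserstein discrepancy builds on) serves as the critic signal. This is a loose sketch under those assumptions, not DALN's exact objective; the featurizer/classifier names and shapes are hypothetical.

```python
# Simplified discriminator-free adversarial alignment (not DALN's exact
# NWD objective). Assumes an arbitrary feature extractor and classifier;
# all names and shapes here are hypothetical.
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def train_step(featurizer, classifier, x_src, y_src, x_tgt, lamb=1.0):
    f_src, f_tgt = featurizer(x_src), featurizer(x_tgt)

    # Supervised loss on labelled source data.
    cls_loss = F.cross_entropy(classifier(f_src), y_src)

    # Adversarial term: the task classifier itself acts as the critic.
    p_src = F.softmax(classifier(GradReverse.apply(f_src, lamb)), dim=1)
    p_tgt = F.softmax(classifier(GradReverse.apply(f_tgt, lamb)), dim=1)
    # Minimising this w.r.t. the classifier widens the source-target gap
    # (critic role); the reversed gradients push the feature extractor the
    # opposite way, shrinking the gap and aligning the two domains.
    adv_loss = torch.linalg.matrix_norm(p_tgt, ord='nuc') \
             - torch.linalg.matrix_norm(p_src, ord='nuc')

    return cls_loss + adv_loss
```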
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent's architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Bias and Discrimination in AI: a cross-disciplinary perspective [5.190307793476366]
We show that finding solutions to bias and discrimination in AI requires robust cross-disciplinary collaborations.
We survey relevant literature about bias and discrimination in AI from an interdisciplinary perspective that embeds technical, legal, social and ethical dimensions.
arXiv Detail & Related papers (2020-08-11T10:02:04Z)
- A Normative approach to Attest Digital Discrimination [6.372554934045607]
Examples include low-income neighbourhoods targeted with high-interest loans or low credit scores, and women being undervalued by 21% in online marketing.
We use norms as an abstraction to represent different situations that may lead to digital discrimination.
In particular, we formalise non-discrimination norms in the context of ML systems and propose an algorithm to check whether ML systems violate these norms. (A toy norm check is sketched after this entry.)
arXiv Detail & Related papers (2020-07-14T15:14:52Z)
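The sketch below shows one way a non-discrimination norm can be made executable. It is not the paper's formalism; the norm encoded here is the well-known 'four-fifths' disparate-impact rule, and all names and data are hypothetical.

```python
# Toy illustration of expressing a non-discrimination norm as an executable
# check. Not the paper's formalism; the norm below is the common
# "four-fifths" disparate-impact rule, and the data are made up.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Norm:
    name: str
    holds: Callable[[Sequence[int], Sequence[str]], bool]

def four_fifths(decisions: Sequence[int], groups: Sequence[str]) -> bool:
    """Every group's selection rate must be >= 0.8x the highest rate."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = sum(decisions[i] for i in idx) / len(idx)
    return min(rates.values()) >= 0.8 * max(rates.values())

no_disparate_impact = Norm("no disparate impact (4/5 rule)", four_fifths)

# Hypothetical audit of an ML system's accept/reject decisions (1 = accept).
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(no_disparate_impact.name, "violated:",
      not no_disparate_impact.holds(decisions, groups))
```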
- Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI [10.281644134255576]
The article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness.
We show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate.
We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement.
arXiv Detail & Related papers (2020-05-12T16:30:12Z)
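CDD, as defined by Wachter et al., is demographic disparity (DD) computed within each stratum of a legitimate conditioning attribute and then averaged by stratum size. The sketch below follows that reading on hypothetical hiring data.

```python
# Minimal sketch of conditional demographic disparity (CDD): demographic
# disparity (DD) computed within each stratum of a legitimate conditioning
# attribute, then averaged by stratum size. The example data are hypothetical.
from collections import defaultdict

def demographic_disparity(outcomes, protected):
    """DD: share of the protected group among rejections minus its share
    among acceptances (outcome 1 = accepted, protected 1 = member)."""
    acc = [p for o, p in zip(outcomes, protected) if o == 1]
    rej = [p for o, p in zip(outcomes, protected) if o == 0]
    if not acc or not rej:
        return 0.0
    return sum(rej) / len(rej) - sum(acc) / len(acc)

def cdd(outcomes, protected, strata):
    """Size-weighted average of DD over strata (e.g., department applied to)."""
    buckets = defaultdict(list)
    for o, p, s in zip(outcomes, protected, strata):
        buckets[s].append((o, p))
    n = len(outcomes)
    return sum(
        len(b) / n * demographic_disparity([o for o, _ in b], [p for _, p in b])
        for b in buckets.values()
    )

# Hypothetical hiring data: outcome (1 = hired), protected-group membership,
# and the department applied to as the conditioning attribute.
outcomes  = [1, 0, 1, 0, 1, 0, 1, 0]
protected = [0, 1, 0, 1, 1, 0, 0, 1]
strata    = ["eng", "eng", "eng", "eng", "sales", "sales", "sales", "sales"]
print(f"CDD = {cdd(outcomes, protected, strata):+.2f}")  # > 0 disadvantages the group
```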
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)