Generative Discrimination: What Happens When Generative AI Exhibits Bias, and What Can Be Done About It
- URL: http://arxiv.org/abs/2407.10329v1
- Date: Wed, 26 Jun 2024 13:32:58 GMT
- Title: Generative Discrimination: What Happens When Generative AI Exhibits Bias, and What Can Be Done About It
- Authors: Philipp Hacker, Brent Mittelstadt, Frederik Zuiderveen Borgesius, Sandra Wachter
- Abstract summary: This chapter explores how genAI intersects with non-discrimination laws.
It highlights two main types of discriminatory outputs: (i) demeaning and abusive content and (ii) subtler biases due to inadequate representation of protected groups.
It argues for holding genAI providers and deployers liable for discriminatory outputs and highlights the inadequacy of traditional legal frameworks to address genAI-specific issues.
- Score: 2.2913283036871865
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As generative Artificial Intelligence (genAI) technologies proliferate across sectors, they offer significant benefits but also risk exacerbating discrimination. This chapter explores how genAI intersects with non-discrimination laws, identifying shortcomings and suggesting improvements. It highlights two main types of discriminatory outputs: (i) demeaning and abusive content and (ii) subtler biases due to inadequate representation of protected groups, which may not be overtly discriminatory in individual cases but have cumulative discriminatory effects. For example, genAI systems may predominantly depict white men when asked for images of people in important jobs. This chapter examines these issues, categorizing problematic outputs into three legal categories: discriminatory content; harassment; and legally hard cases like unbalanced content, harmful stereotypes or misclassification. It argues for holding genAI providers and deployers liable for discriminatory outputs and highlights the inadequacy of traditional legal frameworks to address genAI-specific issues. The chapter suggests updating EU laws, including the AI Act, to mitigate biases in training and input data, mandating testing and auditing, and evolving legislation to enforce standards for bias mitigation and inclusivity as technology advances.
Related papers
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z)
- Hacking a surrogate model approach to XAI [49.1574468325115]
We show that even if a discriminated subgroup does not get a positive decision from the black-box ADM system, the corresponding question of group membership can be pushed down to an arbitrarily low level.
Our approach generalizes easily to other surrogate models (a toy sketch of the basic surrogate idea follows this entry).
arXiv Detail & Related papers (2024-06-24T13:18:02Z)
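The surrogate approach being attacked is simple enough to sketch. The snippet below is not the paper's hack; it is a minimal, hypothetical illustration of the baseline surrogate-model XAI technique the paper targets, with a random forest standing in for the opaque ADM system and fully synthetic data, so every name and parameter here is an assumption for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in for an opaque ADM system, trained on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Surrogate-model XAI: fit an interpretable tree to the black box's
# *outputs* (not the ground truth), then inspect the tree instead.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
```

The paper's point, per the summary above, is that such surrogate explanations can be gamed so that evidence of group-membership effects is pushed arbitrarily low, which makes surrogate fidelity alone a weak audit signal.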
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns (a minimal example of such a skew test follows this entry).
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
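The paper's own auditing methodology is not reproduced here; as a rough, hypothetical illustration, the snippet below shows the kind of elementary delivery-skew test a third-party auditor might start from: a two-sided two-proportion z-test on impression counts. All figures and the two-group setup are invented.

```python
from math import erf, sqrt

def two_proportion_ztest(k1: int, n1: int, k2: int, n2: int):
    """Two-sided z-test for a difference between two delivery rates."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)  # common rate under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# Invented audit data: impressions of the same education ad delivered to
# two equally sized, equally eligible racial groups.
z, p = two_proportion_ztest(k1=620, n1=1000, k2=540, n2=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests skewed delivery
```

A real audit such as the paper's must also control for confounders like audience composition, budgets, and bidding, which is precisely what makes third-party methodology the hard part.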
- Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness [1.5029560229270191]
The topic of fairness in AI has sparked meaningful discussions in recent years.
From a legal perspective, many open questions remain.
The AI Act might represent a tremendous step towards bridging these two approaches.
arXiv Detail & Related papers (2024-03-29T09:54:09Z)
- Queering the ethics of AI [0.6993026261767287]
The chapter emphasizes the ethical concerns surrounding the potential for AI to perpetuate discrimination.
The chapter argues that a critical examination of the conception of equality that often underpins non-discrimination law is necessary.
arXiv Detail & Related papers (2023-08-25T17:26:05Z)
- Multi-dimensional discrimination in Law and Machine Learning -- A comparative overview [14.650860450187793]
The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models.
In reality, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic.
Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain.
arXiv Detail & Related papers (2023-02-12T20:41:58Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Bias and Discrimination in AI: a cross-disciplinary perspective [5.190307793476366]
We show that finding solutions to bias and discrimination in AI requires robust cross-disciplinary collaborations.
We survey relevant literature about bias and discrimination in AI from an interdisciplinary perspective that embeds technical, legal, social and ethical dimensions.
arXiv Detail & Related papers (2020-08-11T10:02:04Z)
- Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI [10.281644134255576]
The article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness.
We show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate.
We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement.
arXiv Detail & Related papers (2020-05-12T16:30:12Z)
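Because CDD is offered as a baseline statistical measurement, a small sketch may help. The code below follows one common reading of the definition: demographic disparity (DD) compares the protected group's share among negative and positive decisions, and CDD averages per-stratum DD weighted by stratum size. The column names, 0/1 outcome coding, and the toy data are assumptions for illustration, not the paper's notation.

```python
import pandas as pd

def demographic_disparity(df, group, outcome, protected):
    """DD: the protected group's share among rejections minus its
    share among acceptances (one common reading of Wachter et al.)."""
    rejected = df[df[outcome] == 0]
    accepted = df[df[outcome] == 1]
    return ((rejected[group] == protected).mean()
            - (accepted[group] == protected).mean())

def conditional_demographic_disparity(df, group, outcome, protected,
                                      stratum):
    """CDD: size-weighted average of DD within each stratum of a
    legitimate conditioning attribute (e.g. qualification level)."""
    cdd = 0.0
    for _, part in df.groupby(stratum):
        if part[outcome].nunique() < 2:
            continue  # DD is undefined without both outcomes present
        cdd += len(part) / len(df) * demographic_disparity(
            part, group, outcome, protected)
    return cdd

# Tiny invented example: hiring decisions stratified by qualification.
applications = pd.DataFrame({
    "gender":        ["f", "f", "f", "m", "m", "m", "f", "m"],
    "hired":         [0, 0, 1, 1, 1, 0, 1, 1],
    "qualification": ["low"] * 4 + ["high"] * 4,
})
print(conditional_demographic_disparity(
    applications, group="gender", outcome="hired",
    protected="f", stratum="qualification"))
```

A CDD of zero means the protected group is equally represented among positive and negative decisions once the legitimate attribute is controlled for; the paper offers it as a baseline to support judicial interpretation, not as a complete automated fairness test.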
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.