The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law
- URL: http://arxiv.org/abs/2205.01166v1
- Date: Mon, 2 May 2022 19:19:43 GMT
- Title: The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law
- Authors: Sandra Wachter
- Abstract summary: This article examines the legal status of algorithmic groups in North American and European non-discrimination doctrine, law, and jurisprudence.
I propose a new theory of harm - "the theory of artificial immutability" - that aims to bring AI groups within the scope of the law.
- Score: 0.8460698440162889
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) is increasingly used to make important decisions
about people. While issues of AI bias and proxy discrimination are well
explored, less focus has been paid to the harms created by profiling based on
groups that do not map to or correlate with legally protected groups such as
sex or ethnicity. This raises a question: are existing equality laws able to
protect against emergent AI-driven inequality? This article examines the legal
status of algorithmic groups in North American and European non-discrimination
doctrine, law, and jurisprudence and will show that algorithmic groups are not
comparable to traditional protected groups. Nonetheless, these new groups are
worthy of protection. I propose a new theory of harm - "the theory of
artificial immutability" - that aims to bring AI groups within the scope of the
law. My theory describes how algorithmic groups act as de facto immutable
characteristics in practice that limit people's autonomy and prevent them from
achieving important goals.
Related papers
- Strengthening legal protection against discrimination by algorithms and artificial intelligence [1.0406659081400351]
The paper evaluates current legal protection in Europe against discriminatory algorithmic decisions. It argues for sector-specific - rather than general - rules, and outlines an approach to regulating algorithmic decision-making.
arXiv Detail & Related papers (2025-10-03T09:54:03Z)
- Protected Grounds and the System of Non-Discrimination Law in the Context of Algorithmic Decision-Making and Artificial Intelligence [0.1915265522996079]
This paper explores which system of non-discrimination law can best be applied to algorithmic decision-making. It analyses the current loopholes in the protection offered by non-discrimination law and explores the best way for lawmakers to approach algorithmic differentiation.
arXiv Detail & Related papers (2025-09-01T15:21:12Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Generative Discrimination: What Happens When Generative AI Exhibits Bias, and What Can Be Done About It [2.2913283036871865]
This chapter explores how genAI intersects with non-discrimination laws.
It highlights two main types of discriminatory outputs: (i) demeaning and abusive content and (ii) subtler biases due to inadequate representation of protected groups.
It argues for holding genAI providers and deployers liable for discriminatory outputs and highlights the inadequacy of traditional legal frameworks to address genAI-specific issues.
arXiv Detail & Related papers (2024-06-26T13:32:58Z)
- Non-discrimination law in Europe: a primer for non-lawyers [44.715854387549605]
We aim to describe the law in such a way that non-lawyers and non-European lawyers can easily grasp its contents and challenges.
We introduce the EU-wide non-discrimination rules which are included in a number of EU directives.
The last section broadens the horizon to include bias-relevant law and cases from the EU AI Act, and related statutes.
arXiv Detail & Related papers (2024-04-12T14:59:58Z)
- Algorithmic Unfairness through the Lens of EU Non-Discrimination Law: Or Why the Law is not a Decision Tree [5.153559154345212]
We show that EU non-discrimination law coincides with notions of algorithmic fairness proposed in computer science literature.
We set out the normative underpinnings of fairness metrics and technical interventions and compare these to the legal reasoning of the Court of Justice of the EU.
We conclude with implications for AI practitioners and regulators.
arXiv Detail & Related papers (2023-05-05T12:00:39Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Designing Equitable Algorithms [1.9006392177894293]
Predictive algorithms are now used to help distribute a large share of our society's resources and sanctions.
These algorithms can improve the efficiency and equity of decision-making.
But they could entrench and exacerbate disparities, particularly along racial, ethnic, and gender lines.
arXiv Detail & Related papers (2023-02-17T22:00:44Z)
- Multi-dimensional discrimination in Law and Machine Learning -- A comparative overview [14.650860450187793]
The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models.
In reality, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic.
Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain.
arXiv Detail & Related papers (2023-02-12T20:41:58Z)
- Within-group fairness: A guidance for more sound between-group fairness [1.675857332621569]
We introduce a new concept of fairness called within-group fairness.
We develop learning algorithms to control within-group fairness and between-group fairness simultaneously.
Numerical studies show that the proposed learning algorithms improve within-group fairness without sacrificing accuracy or between-group fairness.
arXiv Detail & Related papers (2023-01-20T00:39:19Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.