Affirmative Algorithms: The Legal Grounds for Fairness as Awareness
- URL: http://arxiv.org/abs/2012.14285v1
- Date: Fri, 18 Dec 2020 22:53:20 GMT
- Title: Affirmative Algorithms: The Legal Grounds for Fairness as Awareness
- Authors: Daniel E. Ho and Alice Xiang
- Abstract summary: We discuss how algorithmic fairness techniques will likely be deemed "algorithmic affirmative action."
We argue that the government-contracting cases offer an alternative grounding for algorithmic fairness.
We call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to specific causes and mechanisms of bias.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While there has been a flurry of research in algorithmic fairness, what is less recognized is that modern antidiscrimination law may prohibit the adoption of such techniques. We make three contributions. First, we discuss how such approaches will likely be deemed "algorithmic affirmative action," posing serious legal risks of violating equal protection, particularly under the higher education jurisprudence. Such cases have increasingly turned toward anticlassification, demanding "individualized consideration" and barring formal, quantitative weights for race regardless of purpose. This case law is hence fundamentally incompatible with fairness in machine learning. Second, we argue that the government-contracting cases offer an alternative grounding for algorithmic fairness, as these cases permit explicit and quantitative race-based remedies based on historical discrimination by the actor. Third, while limited, this doctrinal approach also guides the future of algorithmic fairness, mandating that adjustments be calibrated to the entity's responsibility for historical discrimination causing present-day disparities. The contractor cases provide a legally viable path for algorithmic fairness under current constitutional doctrine but call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to specific causes and mechanisms of bias.
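The third contribution suggests a quantitative exercise: estimate how much of a present-day disparity the entity itself caused, and size the remedy accordingly. The Python sketch below is a hypothetical illustration, not the authors' method; the variable names (`legacy`, `other`), the data-generating process, and the crude regression decomposition are all assumptions.

```python
# Hypothetical sketch: cap a score adjustment at the share of the observed
# disparity attributable to an input the entity itself controls.
# Crude Oaxaca-Blinder-style decomposition on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # 1 = disadvantaged group
legacy = rng.normal(-0.5 * group, 1.0, n)  # entity-controlled input (e.g. a biased legacy rating)
other = rng.normal(-0.3 * group, 1.0, n)   # disparity the entity did not cause
score = 0.6 * legacy + 0.4 * other + rng.normal(0, 0.1, n)

total_gap = score[group == 0].mean() - score[group == 1].mean()

# Share of the gap flowing through the entity-controlled input:
# regression coefficient of `legacy` times the group difference in `legacy`.
beta_legacy = np.linalg.lstsq(
    np.column_stack([legacy, other, np.ones(n)]), score, rcond=None
)[0][0]
attributable = beta_legacy * (legacy[group == 0].mean() - legacy[group == 1].mean())

# Remedy calibrated to responsibility: adjust disadvantaged scores by at most
# the attributable share, never the full observed gap.
adjustment = min(attributable, total_gap)
score_remedied = score + adjustment * (group == 1)

print(f"total gap: {total_gap:.3f}, attributable share: {attributable:.3f}")
```

Real calibration would require the causal analysis the abstract calls for; the decomposition above is only a placeholder for that step.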
Related papers
- Randomization Techniques to Mitigate the Risk of Copyright Infringement (2024-08-21)
We investigate potential randomization approaches that can complement current practices for copyright protection.
This is motivated by the inherent ambiguity of the rules that determine substantial similarity in copyright precedents.
Similar randomized approaches, such as differential privacy, have been successful in mitigating privacy risks.
- Auditing for Racial Discrimination in the Delivery of Education Ads (2024-06-02)
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns (a generic delivery-rate test is sketched below).
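Delivery auditing of this kind ultimately rests on a statistical comparison of delivery rates across groups. The following is a minimal two-proportion z-test on invented counts; it is not the authors' auditing method, which must also control for targeting and confounders.

```python
# Generic audit sketch: test whether an ad was delivered to two racial
# groups at different rates. All counts are invented for illustration.
from statistics import NormalDist

shown_a, audience_a = 4_200, 10_000   # impressions vs. eligible users, group A
shown_b, audience_b = 3_600, 10_000   # impressions vs. eligible users, group B

p_a, p_b = shown_a / audience_a, shown_b / audience_b
p_pool = (shown_a + shown_b) / (audience_a + audience_b)
se = (p_pool * (1 - p_pool) * (1 / audience_a + 1 / audience_b)) ** 0.5
z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

print(f"delivery rates: {p_a:.2%} vs {p_b:.2%}, z = {z:.2f}, p = {p_value:.2g}")
```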
- Fairness-Accuracy Trade-Offs: A Causal Perspective (2024-05-24)
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning (a generic fairness-penalized stand-in is sketched below).
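The summary does not spell out the neural approach, so the sketch below substitutes a generic constrained-fair-learning pattern: logistic regression trained with a penalty on the squared gap in mean predictions between groups. It illustrates the accuracy-fairness tension, not the paper's causal constraint.

```python
# Generic fairness-penalized training sketch (NOT the paper's causal method):
# logistic regression with a penalty on the squared group gap in predictions.
import numpy as np

rng = np.random.default_rng(1)
n, d = 5_000, 4
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, n)
X[:, 0] += 1.0 * group                  # a feature correlated with group
y = (X @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(0, 1, n) > 0).astype(float)

w = np.zeros(d)
lam, lr = 2.0, 0.1                      # fairness weight; higher = more parity, less accuracy
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    grad_loss = X.T @ (p - y) / n       # logistic-loss gradient
    gap = p[group == 1].mean() - p[group == 0].mean()
    s = p * (1 - p)                     # sigmoid derivative
    grad_gap = (X[group == 1].T @ s[group == 1] / (group == 1).sum()
                - X[group == 0].T @ s[group == 0] / (group == 0).sum())
    w -= lr * (grad_loss + lam * 2 * gap * grad_gap)

print(f"final prediction gap between groups: {gap:.3f}")
```

Setting lam to 0 recovers plain logistic regression; raising it shrinks the gap at some cost in fit, which is the trade-off the paper analyzes causally.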
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment (2024-03-27)
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create an information bottleneck, aiming to enhance the encoder's representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
- Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity & Conditional Demographic Disparity (2023-06-14)
Empirical evidence suggests that algorithmic decisions driven by Machine Learning (ML) techniques threaten to discriminate against legally protected groups or create new sources of unfairness.
This work assesses the extent to which legal fairness can be assured through fairness metrics and under fairness constraints.
Our experiments and analysis suggest that AI-assisted decision-making can be fair from a legal perspective depending on the case at hand and the legal justification (both metrics are sketched below).
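Both metrics are short to state in code. The sketch below uses synthetic data; the column roles (a protected attribute and one "legitimate" conditioning variable) are assumptions, and CDD is rendered as a size-weighted average of within-stratum gaps, which simplifies the formulation used in the EU-law literature.

```python
# Minimal sketch of demographic parity (DP) and conditional demographic
# disparity (CDD). Data and column roles are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 8_000
group = rng.integers(0, 2, n)        # protected attribute
stratum = rng.integers(0, 3, n)      # "legitimate" variable, e.g. job level
accepted = rng.random(n) < (0.5 - 0.1 * group + 0.05 * stratum)

# Demographic parity difference: gap in overall acceptance rates.
dp = accepted[group == 0].mean() - accepted[group == 1].mean()

# Conditional demographic disparity: size-weighted average of the
# within-stratum acceptance-rate gaps.
cdd = sum(
    (stratum == s).mean()
    * (accepted[(group == 0) & (stratum == s)].mean()
       - accepted[(group == 1) & (stratum == s)].mean())
    for s in np.unique(stratum)
)

print(f"DP difference: {dp:.3f}, CDD: {cdd:.3f}")
```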
- Algorithmic Unfairness through the Lens of EU Non-Discrimination Law: Or Why the Law is not a Decision Tree (2023-05-05)
We show that EU non-discrimination law coincides with notions of algorithmic fairness proposed in computer science literature.
We set out the normative underpinnings of fairness metrics and technical interventions and compare these to the legal reasoning of the Court of Justice of the EU.
We conclude with implications for AI practitioners and regulators.
- Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP (2023-02-11)
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
- Fairness in Matching under Uncertainty (2023-02-08)
The growth of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in matching.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations (a toy LP is sketched below).
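As a toy version of that idea, the LP below distributes a single slot over three candidates, maximizing expected utility subject to a per-candidate probability floor meant to respect uncertainty about true merit. The numbers and the particular fairness constraint are invented; this is not the paper's axiomatized framework.

```python
# Toy LP sketch: a distribution over allocations that maximizes expected
# utility while guaranteeing each candidate a probability floor equal to
# the (assumed) chance that they are truly the most meritorious.
from scipy.optimize import linprog

utility = [0.9, 0.8, 0.5]        # decision-maker's expected utility per candidate
merit_floor = [0.4, 0.3, 0.1]    # assumed P(candidate has highest true merit)

res = linprog(
    c=[-u for u in utility],                  # linprog minimizes, so negate
    A_eq=[[1, 1, 1]], b_eq=[1],               # allocation probabilities sum to 1
    bounds=[(f, 1) for f in merit_floor],     # fairness floors per candidate
)
print("allocation distribution:", [round(x, 3) for x in res.x])
```

The slack above the floors (here 0.2) flows to the highest-utility candidate, which is exactly the utility-fairness split such a framework has to negotiate.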
- Beyond Incompatibility: Trade-offs between Mutually Exclusive Fairness Criteria in Machine Learning and Law (2022-12-01)
We present a novel algorithm (FAir Interpolation Method: FAIM) for continuously interpolating between three fairness criteria.
We demonstrate the effectiveness of our algorithm when applied to synthetic data, the COMPAS data set, and a new, real-world data set from the e-commerce sector (a generic interpolation sketch follows below).
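FAIM's construction is not described in this summary. Purely to illustrate what continuous interpolation between criteria can look like, the sketch below convex-combines per-group decision thresholds, each chosen to satisfy a different criterion; the scores, weights, and threshold-based framing are all assumptions.

```python
# Illustration only (not FAIM): interpolate between per-group thresholds
# that each satisfy a different fairness criterion, via convex weights.
import numpy as np

rng = np.random.default_rng(3)
n = 6_000
group = rng.integers(0, 2, n)
y = rng.random(n) < 0.5
score = np.clip(0.5 * y + 0.3 * rng.random(n) + 0.1 * group, 0, 1)

def threshold_for_rate(s, rate):
    """Threshold such that roughly `rate` of the scores s exceed it."""
    return np.quantile(s, 1 - rate)

t_parity = [threshold_for_rate(score[group == g], 0.4) for g in (0, 1)]          # equal acceptance rates
t_equal_opp = [threshold_for_rate(score[(group == g) & y], 0.7) for g in (0, 1)] # equal true-positive rates
t_blind = [np.quantile(score, 0.6)] * 2                                          # one group-blind threshold

theta = (0.2, 0.5, 0.3)   # interpolation weights over the three criteria
t_interp = [theta[0] * t_parity[g] + theta[1] * t_equal_opp[g] + theta[2] * t_blind[g]
            for g in (0, 1)]
print("interpolated thresholds per group:", [round(t, 3) for t in t_interp])
```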
- Developing a Philosophical Framework for Fair Machine Learning: Lessons From The Case of Algorithmic Collusion (2022-07-05)
As machine learning algorithms are applied in new contexts, the harms and injustices that result are qualitatively different.
The existing research paradigm in machine learning which develops metrics and definitions of fairness cannot account for these qualitatively different types of injustice.
I propose an ethical framework for researchers and practitioners in machine learning seeking to develop and apply fairness metrics.
- Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There? (2021-05-04)
We show that metrics implementing equality of opportunity only apply when resource allocations are based on deservingness, but fail when allocations should reflect concerns about egalitarianism, sufficiency, and priority.
We argue that by cleanly distinguishing between prediction tasks and decision tasks, research on fair machine learning could take better advantage of the rich literature on distributive justice (a sketch of the equality-of-opportunity metric follows below).
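Concretely, the equality-of-opportunity metric mentioned here compares true-positive rates across groups. A minimal sketch on invented labels and predictions:

```python
# Minimal sketch: equality of opportunity as the gap in true-positive
# rates across groups. Labels and predictions are invented.
import numpy as np

rng = np.random.default_rng(4)
n = 5_000
group = rng.integers(0, 2, n)
y_true = rng.random(n) < 0.5
y_pred = rng.random(n) < (0.45 - 0.15 * group + 0.3 * y_true)

def tpr(pred, true, mask):
    """True-positive rate restricted to the rows selected by `mask`."""
    pos = mask & true
    return (pred & pos).sum() / pos.sum()

eo_gap = tpr(y_pred, y_true, group == 0) - tpr(y_pred, y_true, group == 1)
print(f"equal-opportunity gap (TPR difference): {eo_gap:.3f}")
```

Whether a zero TPR gap is the right target is precisely the distributive-justice question the paper raises: it presumes allocation by deservingness.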