Designing Equitable Algorithms
- URL: http://arxiv.org/abs/2302.09157v1
- Date: Fri, 17 Feb 2023 22:00:44 GMT
- Title: Designing Equitable Algorithms
- Authors: Alex Chohlas-Wood, Madison Coots, Sharad Goel, Julian Nyarko
- Abstract summary: Predictive algorithms are now used to help distribute a large share of our society's resources and sanctions.
These algorithms can improve the efficiency and equity of decision-making.
But they could entrench and exacerbate disparities, particularly along racial, ethnic, and gender lines.
- Score: 1.9006392177894293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predictive algorithms are now used to help distribute a large share of our
society's resources and sanctions, such as healthcare, loans, criminal
detentions, and tax audits. Under the right circumstances, these algorithms can
improve the efficiency and equity of decision-making. At the same time, there
is a danger that the algorithms themselves could entrench and exacerbate
disparities, particularly along racial, ethnic, and gender lines. To help
ensure their fairness, many researchers suggest that algorithms be subject to
at least one of three constraints: (1) no use of legally protected features,
such as race, ethnicity, and gender; (2) equal rates of "positive" decisions
across groups; and (3) equal error rates across groups. Here we show that these
constraints, while intuitively appealing, often worsen outcomes for individuals
in marginalized groups, and can even leave all groups worse off. The inherent
trade-off we identify between formal fairness constraints and welfare
improvements -- particularly for the marginalized -- highlights the need for a
more robust discussion on what it means for an algorithm to be "fair". We
illustrate these ideas with examples from healthcare and the criminal-legal
system, and make several proposals to help practitioners design more equitable
algorithms.
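The three fairness constraints listed in the abstract can be checked mechanically. Below is a minimal illustrative sketch, on a tiny synthetic dataset, of how one might measure constraints (2) and (3): per-group positive-decision rates and per-group error rates. All names and numbers are hypothetical, not from the paper.

```python
def rates_by_group(groups, decisions, outcomes):
    """Return per-group positive-decision rates and error rates."""
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        pos = sum(decisions[i] for i in idx) / len(idx)
        err = sum(decisions[i] != outcomes[i] for i in idx) / len(idx)
        stats[g] = {"positive_rate": pos, "error_rate": err}
    return stats

# Toy data: group label, algorithmic decision (1 = positive), true outcome.
groups    = ["a", "a", "a", "b", "b", "b"]
decisions = [1,   0,   1,   1,   0,   0]
outcomes  = [1,   0,   0,   1,   1,   0]

stats = rates_by_group(groups, decisions, outcomes)
# Constraint (2): equal rates of positive decisions across groups.
parity_gap = abs(stats["a"]["positive_rate"] - stats["b"]["positive_rate"])
# Constraint (3): equal error rates across groups.
error_gap = abs(stats["a"]["error_rate"] - stats["b"]["error_rate"])
print(parity_gap, error_gap)
```

Note that in this toy example the error rates are equal while the positive-decision rates are not, illustrating that the constraints can pull in different directions.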
Related papers
- Intuitions of Compromise: Utilitarianism vs. Contractualism [42.3322948655612]
We use a paradigm that applies algorithms to aggregate preferences across groups in a social decision-making context.
While the dominant approach to value aggregation up to now has been utilitarian, we find that people strongly prefer the aggregations recommended by the contractualist algorithm.
arXiv Detail & Related papers (2024-10-07T21:05:57Z)
- Fair Enough? A map of the current limitations of the requirements to have fair algorithms [43.609606707879365]
We argue that there is a hiatus between what society demands from Automated Decision-Making systems and what this demand actually means in real-world scenarios.
We outline the key features of this hiatus and pinpoint a set of crucial open points that we as a society must address in order to give concrete meaning to the increasing demand for fairness in Automated Decision-Making systems.
arXiv Detail & Related papers (2023-11-21T08:44:38Z)
- Fairness and Bias in Algorithmic Hiring: a Multidisciplinary Survey [43.463169774689646]
This survey caters to practitioners and researchers with a balanced and integrated coverage of systems, biases, measures, mitigation strategies, datasets, and legal aspects of algorithmic hiring and fairness.
Our work supports a contextualized understanding and governance of this technology by highlighting current opportunities and limitations, providing recommendations for future work to ensure shared benefits for all stakeholders.
arXiv Detail & Related papers (2023-09-25T08:04:18Z)
- Algorithms, Incentives, and Democracy [0.0]
We show how optimal classification by an algorithm designer can affect the distribution of behavior in a population.
We then examine the effect of democratizing the rewards and punishments, or stakes, attached to algorithmic classification, to consider how a society can potentially stem (or facilitate!) predatory classification.
arXiv Detail & Related papers (2023-07-05T14:22:01Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Detection of Groups with Biased Representation in Ranking [28.095668425175564]
We study the problem of detecting groups with biased representation in the top-$k$ ranked items.
We propose efficient search algorithms for two different fairness measures.
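A naive baseline for the detection task this paper describes is to compare each group's share of the top-$k$ items with its share of the full candidate pool. The sketch below is illustrative only (the paper proposes efficient search algorithms, not this exhaustive check), and all item and group names are hypothetical.

```python
def representation_gap(ranking, group_of, k):
    """Difference between each group's share in the top-k and its overall share."""
    top_k = ranking[:k]
    gaps = {}
    for g in set(group_of.values()):
        overall = sum(group_of[x] == g for x in ranking) / len(ranking)
        in_top = sum(group_of[x] == g for x in top_k) / k
        gaps[g] = in_top - overall
    return gaps

# Toy ranking of six items from two groups.
ranking = ["i1", "i2", "i3", "i4", "i5", "i6"]
group_of = {"i1": "x", "i2": "x", "i3": "x",
            "i4": "y", "i5": "y", "i6": "y"}
print(representation_gap(ranking, group_of, k=3))
```

Here group "y" is entirely absent from the top 3 despite holding half the pool, which a fairness measure of this kind would flag.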
arXiv Detail & Related papers (2022-12-30T10:50:02Z)
- Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions [98.75618795470524]
We study the linear contextual bandit problem in the presence of adversarial corruption.
We propose a new algorithm based on the principle of optimism in the face of uncertainty.
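The "optimism in the face of uncertainty" principle can be illustrated with the classic UCB1 algorithm on a simple multi-armed bandit. This is not the paper's linear contextual algorithm and has no corruption handling; it is only a sketch of the underlying principle: play the arm whose empirical mean plus confidence bonus is highest.

```python
import math
import random

def ucb1(pull, n_arms, horizon, seed=0):
    """Run UCB1 for `horizon` rounds; return pull counts per arm."""
    random.seed(seed)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # play each arm once to initialize
        else:
            # Optimism: pick the arm with the highest optimistic estimate,
            # i.e. empirical mean plus a confidence bonus that shrinks
            # as the arm is pulled more often.
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
    return counts

# Bernoulli arms with means 0.2 and 0.8: UCB1 should favor arm 1.
means = [0.2, 0.8]
counts = ucb1(lambda a: 1.0 if random.random() < means[a] else 0.0,
              n_arms=2, horizon=500)
print(counts)
```

The confidence bonus ensures under-explored arms still look attractive, which is what lets the algorithm recover from unlucky early draws.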
arXiv Detail & Related papers (2022-05-13T17:58:58Z)
- Fair Algorithm Design: Fair and Efficacious Machine Scheduling [0.0]
There is often a dichotomy between fairness and efficacy: fair algorithms may proffer low-social-welfare solutions, whereas welfare-optimizing algorithms may be very unfair.
In this paper, we prove that this dichotomy can be overcome if we allow for a negligible amount of bias.
arXiv Detail & Related papers (2022-04-13T14:56:22Z)
- Learning to be Fair: A Consequentialist Approach to Equitable Decision-Making [21.152377319502705]
We present an alternative framework for designing equitable algorithms.
In our approach, one first elicits stakeholder preferences over the space of possible decisions.
We then optimize over the space of decision policies, making trade-offs in a way that maximizes the elicited utility.
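A toy version of this workflow might encode elicited stakeholder preferences as utilities over decision outcomes, then search a simple family of threshold policies for the one maximizing expected utility. All utility values, scores, and thresholds below are illustrative assumptions, not taken from the paper.

```python
# Elicited preferences: value of a positive decision for someone who
# truly benefits, and cost of one for someone who does not.
UTILITY = {"true_positive": 1.0, "false_positive": -0.5}

def expected_utility(threshold, scores, benefits):
    """Average utility of a 'decide positive if score >= threshold' policy."""
    total = 0.0
    for s, b in zip(scores, benefits):
        if s >= threshold:
            total += UTILITY["true_positive"] if b else UTILITY["false_positive"]
    return total / len(scores)

scores   = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]   # predicted probability of benefit
benefits = [1,   1,   0,   1,   1,   0]     # whether the person truly benefits

# Optimize over a small space of threshold policies.
best = max([0.0, 0.2, 0.5, 0.7],
           key=lambda t: expected_utility(t, scores, benefits))
print(best)
```

The point of the consequentialist framing is that the trade-off is made explicit in the utility terms rather than imposed via a formal parity constraint.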
arXiv Detail & Related papers (2021-09-18T00:30:43Z)
- Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination [82.52105963476703]
A recurring theme in statistical learning, online learning, and beyond is that faster convergence rates are possible for problems with low noise.
First-order guarantees are relatively well understood in statistical and online learning.
We show that the logarithmic loss and an information-theoretic quantity called the triangular discrimination play a fundamental role in obtaining first-order guarantees.
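For reference, the triangular discrimination mentioned here is a standard f-divergence; under the usual convention (whose notation may differ from the paper's) it is defined for distributions $P$ and $Q$ as:

```latex
\Delta(P, Q) = \sum_{x} \frac{\bigl(P(x) - Q(x)\bigr)^2}{P(x) + Q(x)}
```

It is equivalent to the squared Hellinger distance up to constant factors, which is one reason it interacts naturally with logarithmic-loss analyses.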
arXiv Detail & Related papers (2021-07-05T19:20:34Z)
- Pursuing Open-Source Development of Predictive Algorithms: The Case of Criminal Sentencing Algorithms [0.0]
We argue that open-source algorithm development should be the standard in highly consequential contexts.
We suggest these issues are exacerbated by the proprietary and expensive nature of virtually all widely used criminal sentencing algorithms.
arXiv Detail & Related papers (2020-11-12T14:53:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.