Explainable Disparity Compensation for Efficient Fair Ranking
- URL: http://arxiv.org/abs/2307.14366v2
- Date: Fri, 19 Apr 2024 23:12:14 GMT
- Title: Explainable Disparity Compensation for Efficient Fair Ranking
- Authors: Abraham Gale, Amélie Marian
- Abstract summary: Ranking functions that are used in decision systems often produce disparate results for different populations because of bias in the underlying data.
Recent compensatory measures have mostly focused on opaque transformations of the ranking functions to satisfy fairness guarantees.
In this paper we propose easily explainable data-driven compensatory measures for ranking functions.
- Score: 0.3759936323189418
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ranking functions that are used in decision systems often produce disparate results for different populations because of bias in the underlying data. Addressing, and compensating for, these disparate outcomes is a critical problem for fair decision-making. Recent compensatory measures have mostly focused on opaque transformations of the ranking functions to satisfy fairness guarantees or on the use of quotas or set-asides to guarantee a minimum number of positive outcomes to members of underrepresented groups. In this paper we propose easily explainable data-driven compensatory measures for ranking functions. Our measures rely on the generation of bonus points given to members of underrepresented groups to address disparity in the ranking function. The bonus points can be set in advance, and can be combined, allowing for the consideration of intersections of representations and giving better transparency to stakeholders. We propose efficient sampling-based algorithms to calculate the number of bonus points to minimize disparity. We validate our algorithms using real-world school admissions and recidivism datasets, and compare our results with those of existing fair ranking algorithms.
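To make the bonus-point idea concrete, here is a minimal, hypothetical sketch, not the authors' actual algorithm: it grid-searches a single bonus value for one protected group and scores each candidate by the average top-k selection-rate gap measured on random subsamples of the data. All names (estimate_bonus, topk_selection_rates, candidates, n_samples) are illustrative, and the selection-rate gap is a stand-in for whichever disparity measure the paper actually optimizes.

```python
import numpy as np

def topk_selection_rates(scores, groups, bonus, k):
    """Fraction of each group that lands in the top-k after adding per-group bonus points."""
    adjusted = scores + np.array([bonus.get(g, 0.0) for g in groups])
    top = set(np.argsort(-adjusted)[:k])
    return {g: np.mean([i in top for i, gi in enumerate(groups) if gi == g])
            for g in set(groups)}

def estimate_bonus(scores, groups, protected, k,
                   candidates=np.linspace(0, 20, 41), n_samples=100, seed=0):
    """Grid-search the bonus for `protected` that minimizes the average gap between
    its top-k selection rate and the other groups', estimated on random subsamples."""
    rng = np.random.default_rng(seed)
    scores, groups = np.asarray(scores, dtype=float), list(groups)
    best_bonus, best_gap = 0.0, float("inf")
    for b in candidates:
        gaps = []
        for _ in range(n_samples):
            idx = rng.choice(len(scores), size=max(2, len(scores) // 2), replace=False)
            rates = topk_selection_rates(scores[idx], [groups[i] for i in idx],
                                         {protected: b}, max(1, k // 2))
            others = [r for g, r in rates.items() if g != protected]
            if protected in rates and others:
                gaps.append(abs(rates[protected] - np.mean(others)))
        if gaps and np.mean(gaps) < best_gap:
            best_bonus, best_gap = b, float(np.mean(gaps))
    return best_bonus
```

In this sketch, the returned value would simply be added to every protected-group member's score before the final ranking; the paper's combinable, per-group bonuses for intersectional memberships would extend this single-group illustration.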
Related papers
- Reward-Augmented Data Enhances Direct Preference Alignment of LLMs [63.32585910975191]
We introduce reward-conditioned Large Language Models (LLMs) that learn from the entire spectrum of response quality within the dataset.
We propose an effective yet simple data relabeling method that conditions the preference pairs on quality scores to construct a reward-augmented dataset.
arXiv Detail & Related papers (2024-10-10T16:01:51Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm with promising performance and better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- Fairness in Ranking under Disparate Uncertainty [24.401219403555814]
We argue that ranking can introduce unfairness if the uncertainty of the underlying relevance model differs between groups of options.
We propose Equal-Opportunity Ranking (EOR) as a new fairness criterion for ranking.
We show that EOR corresponds to a group-wise fair lottery among the relevant options even in the presence of disparate uncertainty.
arXiv Detail & Related papers (2023-09-04T13:49:48Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Sampling Individually-Fair Rankings that are Always Group Fair [9.333939443470944]
A fair ranking task asks to rank a set of items to maximize utility subject to satisfying group-fairness constraints.
Recent works identify uncertainty in the utilities of items as a primary cause of unfairness.
We give an efficient algorithm that samples rankings from an individually-fair distribution while ensuring that every output ranking is group fair.
arXiv Detail & Related papers (2023-06-21T01:26:34Z)
- Correcting Underrepresentation and Intersectional Bias for Classification [49.1574468325115]
We consider the problem of learning from data corrupted by underrepresentation bias.
We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out rates.
We show that our algorithm permits efficient learning for model classes of finite VC dimension.
arXiv Detail & Related papers (2023-06-19T18:25:44Z)
- Detection of Groups with Biased Representation in Ranking [28.095668425175564]
We study the problem of detecting groups with biased representation in the top-$k$ ranked items.
We propose efficient search algorithms for two different fairness measures.
arXiv Detail & Related papers (2022-12-30T10:50:02Z)
- RAGUEL: Recourse-Aware Group Unfairness Elimination [2.720659230102122]
'Algorithmic recourse' offers feasible recovery actions to change unwanted outcomes.
We introduce the notion of ranked group-level recourse fairness.
We develop a 'recourse-aware ranking' solution that satisfies ranked recourse fairness constraints.
arXiv Detail & Related papers (2022-08-30T11:53:38Z)
- PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired metrics in the limit of zero temperature (a generic soft-ranking sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-12-12T05:07:36Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
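As a side note on the PiRank entry in the list above: its summary mentions recovering exact ranking metrics in the zero-temperature limit. The snippet below is not PiRank itself but a generic, minimal soft-ranking surrogate in the same spirit, where pairwise sigmoids with a temperature parameter approximate ranks and become exact as the temperature goes to zero; the function name soft_ranks and its signature are illustrative.

```python
import numpy as np

def soft_ranks(scores, temperature=1.0):
    """Differentiable rank surrogate: item i's soft rank is 1 plus the sum of
    sigmoid((s_j - s_i) / temperature) over j != i. As temperature -> 0 each
    sigmoid approaches an indicator of s_j > s_i, so the exact ranks
    (1 = highest score) are recovered."""
    s = np.asarray(scores, dtype=float)
    diff = s[None, :] - s[:, None]                    # diff[i, j] = s_j - s_i
    sig = 1.0 / (1.0 + np.exp(-diff / temperature))
    return 1.0 + sig.sum(axis=1) - 0.5                # drop the self term, sigmoid(0) = 0.5

# Example: the soft ranks sharpen toward the exact ranks as the temperature decreases.
print(soft_ranks([3.0, 1.0, 2.0], temperature=1.0))  # roughly [1.39, 2.61, 2.0]
print(soft_ranks([3.0, 1.0, 2.0], temperature=0.1))  # approximately [1., 3., 2.]
```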
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.