RAGUEL: Recourse-Aware Group Unfairness Elimination
- URL: http://arxiv.org/abs/2208.14175v1
- Date: Tue, 30 Aug 2022 11:53:38 GMT
- Title: RAGUEL: Recourse-Aware Group Unfairness Elimination
- Authors: Aparajita Haldar, Teddy Cunningham, Hakan Ferhatosmanoglu
- Abstract summary: 'Algorithmic recourse' offers feasible recovery actions to change unwanted outcomes.
We introduce the notion of ranked group-level recourse fairness.
We develop a 'recourse-aware ranking' solution that satisfies ranked recourse fairness constraints.
- Score: 2.720659230102122
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While machine learning and ranking-based systems are in widespread use for
sensitive decision-making processes (e.g., determining job candidates,
assigning credit scores), they are rife with concerns over unintended biases in
their outcomes, which makes algorithmic fairness (e.g., demographic parity,
equal opportunity) an objective of interest. 'Algorithmic recourse' offers
feasible recovery actions to change unwanted outcomes through the modification
of attributes. We introduce the notion of ranked group-level recourse fairness,
and develop a 'recourse-aware ranking' solution that satisfies ranked recourse
fairness constraints while minimizing the cost of suggested modifications. Our
solution suggests interventions that can reorder the ranked list of database
records and mitigate group-level unfairness; specifically, disproportionate
representation of sub-groups and recourse cost imbalance. This re-ranking
identifies the minimum modifications to data points, with these attribute
modifications weighted according to their ease of recourse. We then present an
efficient block-based extension that enables re-ranking at any granularity
(e.g., multiple brackets of bank loan interest rates, multiple pages of search
engine results). Evaluation on real datasets shows that, while existing methods
may even exacerbate recourse unfairness, our solution -- RAGUEL --
significantly improves recourse-aware fairness. RAGUEL outperforms alternatives
at improving recourse fairness, through a combined process of counterfactual
generation and re-ranking, whilst remaining efficient for large-scale datasets.
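As a hedged illustration of the abstract's core idea, the sketch below re-ranks a top-k list under a single group-representation constraint, promoting the records whose cheapest single-attribute recourse (weighted by ease of modification) is smallest. It assumes a linear scoring model and uniform recourse-cost weights; the greedy strategy and all names are illustrative, not the actual RAGUEL algorithm.

```python
import numpy as np

def cheapest_recourse(x, w, b, target, cost_weights):
    """Weighted cost of the cheapest single-attribute change that lifts
    a record's linear score w.x + b to at least `target`."""
    gap = target - (w @ x + b)
    if gap <= 0:
        return 0.0
    best = np.inf
    for j in range(len(x)):
        if w[j] == 0:
            continue
        delta = gap / w[j]                    # change needed on attribute j alone
        best = min(best, cost_weights[j] * abs(delta))
    return best

def recourse_aware_rerank(X, groups, w, b, k, min_share):
    """Greedily swap in the cheapest-to-fix group-1 records until the
    top-k contains at least a min_share fraction of group 1."""
    scores = X @ w + b
    order = list(np.argsort(-scores))
    topk = order[:k]
    threshold = scores[topk[-1]]              # score needed to enter the top-k
    cost_weights = np.ones(X.shape[1])        # hypothetical: uniform ease of recourse
    while np.mean(groups[topk] == 1) < min_share:
        outside = [i for i in order[k:] if groups[i] == 1 and i not in topk]
        if not outside:
            break
        costs = [cheapest_recourse(X[i], w, b, threshold, cost_weights)
                 for i in outside]
        promote = outside[int(np.argmin(costs))]   # cheapest feasible intervention
        demote = min((i for i in topk if groups[i] == 0),
                     key=lambda i: scores[i])      # lowest-ranked group-0 record
        topk[topk.index(demote)] = promote
    return topk
```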
Related papers
- Reward-Augmented Data Enhances Direct Preference Alignment of LLMs [63.32585910975191]
We introduce reward-conditioned Large Language Models (LLMs) that learn from the entire spectrum of response quality within the dataset.
We propose an effective yet simple data relabeling method that conditions the preference pairs on quality scores to construct a reward-augmented dataset.
arXiv Detail & Related papers (2024-10-10T16:01:51Z)
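One plausible reading of this relabeling step is sketched below, with hypothetical field names and conditioning format (the summary does not give the paper's exact scheme): each preference pair is duplicated and conditioned on a quality score, flipping the preference under the lower score.

```python
def reward_augment(pairs):
    """pairs: dicts with hypothetical fields 'prompt', 'chosen', 'rejected',
    'chosen_score', 'rejected_score' (quality scores from a reward model)."""
    augmented = []
    for p in pairs:
        hi, lo = p["chosen_score"], p["rejected_score"]
        # Conditioned on the higher quality score, the original preference holds.
        augmented.append({"prompt": f"[reward: {hi:.1f}] {p['prompt']}",
                          "chosen": p["chosen"], "rejected": p["rejected"]})
        # Conditioned on the lower score, the preference flips, so the model
        # also learns what lower-quality responses look like.
        augmented.append({"prompt": f"[reward: {lo:.1f}] {p['prompt']}",
                          "chosen": p["rejected"], "rejected": p["chosen"]})
    return augmented

pairs = [{"prompt": "Summarize the report.", "chosen": "Concise summary...",
          "rejected": "Rambling answer...", "chosen_score": 0.9,
          "rejected_score": 0.3}]
print(reward_augment(pairs)[0]["prompt"])   # "[reward: 0.9] Summarize the report."
```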
- Optimal Group Fair Classifiers from Linear Post-Processing [10.615965454674901]
We propose a post-processing algorithm for fair classification that mitigates model bias under a unified family of group fairness criteria.
It achieves fairness by re-calibrating the output score of the given base model with a "fairness cost" -- a linear combination of the (predicted) group memberships.
arXiv Detail & Related papers (2024-05-07T05:58:44Z)
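A minimal sketch of the "fairness cost" re-calibration described above: the adjusted score is the base score minus a linear combination of predicted group-membership probabilities. The lambda coefficients here are hypothetical; in the paper they are derived from the chosen group-fairness criterion.

```python
import numpy as np

def fairness_cost_adjust(scores, group_probs, lambdas, threshold=0.5):
    """scores: (n,) base-model scores in [0, 1].
    group_probs: (n, g) predicted group-membership probabilities.
    lambdas: (g,) fairness-cost coefficients, one per group."""
    adjusted = scores - group_probs @ np.asarray(lambdas)
    return (adjusted >= threshold).astype(int)

# Example: penalize scores of likely group-0 members slightly to equalize
# positive rates across the two groups.
scores = np.array([0.62, 0.55, 0.48, 0.71])
group_probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.8, 0.2], [0.1, 0.9]])
print(fairness_cost_adjust(scores, group_probs, lambdas=[0.1, 0.0]))  # [1 1 0 1]
```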
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
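For reference, an Ordered Weighted Average (OWA) objective over per-group utilities is sketched below: decreasing weights put more mass on the worst-off groups, which is what makes OWA aggregation fairness-promoting. The backpropagation-through-constrained-optimization machinery from the paper is not shown; the utilities and weights are illustrative.

```python
import numpy as np

def owa(utilities, weights):
    """OWA: sort utilities ascending, then take a weighted sum with weights
    ordered from largest (applied to the worst-off group) to smallest."""
    u = np.sort(np.asarray(utilities))          # worst group first
    w = np.asarray(weights)
    assert np.all(np.diff(w) <= 0) and np.isclose(w.sum(), 1.0)
    return float(u @ w)

group_utilities = [0.9, 0.4, 0.7]               # e.g., per-group exposure
print(owa(group_utilities, weights=[0.5, 0.3, 0.2]))  # emphasizes the 0.4 group
```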
- A Distributionally Robust Optimisation Approach to Fair Credit Scoring [2.8851756275902467]
Credit scoring has been catalogued by the European Commission and the Executive Office of the US President as a high-risk classification task.
To address this concern, recent credit scoring research has considered a range of fairness-enhancing techniques.
arXiv Detail & Related papers (2024-02-02T11:43:59Z)
- Fairness in Ranking under Disparate Uncertainty [24.401219403555814]
We argue that ranking can introduce unfairness if the uncertainty of the underlying relevance model differs between groups of options.
We propose Equal-Opportunity Ranking (EOR) as a new fairness criterion for ranking.
We show that EOR corresponds to a group-wise fair lottery among the relevant options even in the presence of disparate uncertainty.
arXiv Detail & Related papers (2023-09-04T13:49:48Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
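The xOrder adjustment itself is not specified in the summary; as a hedged illustration, below is one bipartite ranking fairness metric that such post-processing frameworks can target: a cross-group AUC, the probability that a positive example from group a is ranked above a negative example from group b. A large disparity between the two cross-group AUCs signals ranking unfairness.

```python
import numpy as np

def cross_group_auc(scores, labels, groups, a, b):
    """Fraction of (positive-from-a, negative-from-b) pairs ranked correctly."""
    pos = scores[(labels == 1) & (groups == a)]
    neg = scores[(labels == 0) & (groups == b)]
    if len(pos) == 0 or len(neg) == 0:
        return float("nan")
    return float(np.mean(pos[:, None] > neg[None, :]))

scores = np.array([0.9, 0.3, 0.25, 0.4, 0.6, 0.2])
labels = np.array([1, 0, 1, 0, 1, 0])
groups = np.array([0, 0, 1, 1, 0, 1])
# 1.0 vs 0.0: group-1 positives are systematically under-ranked.
print(cross_group_auc(scores, labels, groups, a=0, b=1),
      cross_group_auc(scores, labels, groups, a=1, b=0))
```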
- Explainable Disparity Compensation for Efficient Fair Ranking [0.3759936323189418]
Ranking functions that are used in decision systems often produce disparate results for different populations because of bias in the underlying data.
Recent compensatory measures have mostly focused on opaque transformations of the ranking functions to satisfy fairness guarantees.
In this paper we propose easily explainable data-driven compensatory measures for ranking functions.
arXiv Detail & Related papers (2023-07-25T09:12:50Z)
- Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization [61.39201891894024]
Group distributionally robust optimization (group DRO) can minimize the worst-case loss over pre-defined groups.
We reformulate the group DRO framework by proposing Q-Diversity.
Characterized by an interactive training mode, Q-Diversity relaxes the group identification from annotation into direct parameterization.
arXiv Detail & Related papers (2023-05-20T07:02:27Z)
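A minimal sketch of the group DRO objective the summary refers to: minimize the worst-case (here, the maximum) per-group average loss. Q-Diversity's parameterized group identification is not shown; groups are assumed given.

```python
import numpy as np

def worst_group_loss(losses, groups):
    """losses: (n,) per-example losses. groups: (n,) group ids.
    Returns the group-DRO objective: the largest per-group mean loss."""
    per_group = [losses[groups == g].mean() for g in np.unique(groups)]
    return max(per_group)

losses = np.array([0.2, 0.9, 0.3, 1.1, 0.25])
groups = np.array([0, 1, 0, 1, 0])
print(worst_group_loss(losses, groups))   # 1.0: group 1 dominates the objective
```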
- Mitigating Algorithmic Bias with Limited Annotations [65.060639928772]
When sensitive attributes are undisclosed or unavailable, a small portion of the training data must be annotated manually to mitigate bias.
We propose Active Penalization Of Discrimination (APOD), an interactive framework to guide the limited annotations towards maximally eliminating the effect of algorithmic bias.
APOD shows comparable performance to fully annotated bias mitigation, which demonstrates that APOD could benefit real-world applications when sensitive information is limited.
arXiv Detail & Related papers (2022-07-20T16:31:19Z)
- Debiasing Neural Retrieval via In-batch Balancing Regularization [25.941718123899356]
We develop a differentiable normed Pairwise Ranking Fairness (nPRF) measure and leverage T-statistics on top of nPRF to improve fairness.
Our method with nPRF achieves significantly less bias with minimal degradation in ranking performance compared with the baseline.
arXiv Detail & Related papers (2022-05-18T22:57:15Z)
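The exact nPRF definition is not given in the summary; as a hedged sketch, below is a differentiable pairwise-ranking-fairness-style penalty: the gap between groups in how often their items are (softly) ranked above items of the other group, smoothed with a sigmoid so it can serve as a regularizer.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pairwise_fairness_gap(scores, groups):
    a, b = scores[groups == 0], scores[groups == 1]
    # Soft probability that a group-0 item outranks a group-1 item.
    p_a_over_b = sigmoid(a[:, None] - b[None, :]).mean()
    # Under parity this probability is 0.5; penalize the deviation.
    return abs(p_a_over_b - 0.5)

scores = np.array([0.8, 0.7, 0.3, 0.2])
groups = np.array([0, 0, 1, 1])
print(pairwise_fairness_gap(scores, groups))   # positive gap: group 0 outranks group 1
```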
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)