Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility
- URL: http://arxiv.org/abs/2006.08267v4
- Date: Mon, 7 Jun 2021 09:26:05 GMT
- Title: Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility
- Authors: Sen Cui, Weishen Pan, Changshui Zhang, Fei Wang
- Abstract summary: Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model-agnostic post-processing framework for balancing them in the bipartite ranking scenario.
- Score: 54.179859639868646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bipartite ranking, which aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data, is widely adopted in various applications where sample prioritization is needed. Recently, there have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups defined by sensitive attributes. While there can be a trade-off between fairness and performance, in this paper we propose a model-agnostic post-processing framework for balancing them in the bipartite ranking scenario. Specifically, we maximize a weighted sum of the utility and fairness by directly adjusting the relative ordering of samples across groups. By formulating this problem as the identification of an optimal warping path across different protected groups, we propose a non-parametric method that searches for such an optimal path through a dynamic programming process. Our method is compatible with various classification models and applicable to a variety of ranking fairness metrics. Comprehensive experiments on a suite of benchmark data sets and two real-world patient electronic health record repositories show that our method achieves a strong balance between algorithm utility and ranking fairness. Furthermore, we experimentally verify the robustness of our method when faced with fewer training samples and with differences between the training and testing ranking score distributions.
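The dynamic program sketched in the abstract can be made concrete. Below is a minimal Python illustration, assuming two protected groups whose scores are already sorted in descending order: DP state (i, j) counts how many items from each group have been placed, and each step appends the next item from one of the two groups. The function name `optimal_warping_path`, the DCG-style utility term, and the group-share fairness penalty are illustrative placeholders, not the paper's exact objective.

```python
import numpy as np

def optimal_warping_path(scores_a, scores_b, lam=1.0):
    """Merge two per-group rankings (scores sorted in descending order)
    into one global ordering via dynamic programming over the warping
    grid. Toy objective: a DCG-style utility term plus a per-position
    group-share fairness penalty weighted by ``lam``."""
    na, nb = len(scores_a), len(scores_b)
    target_a = na / (na + nb)  # proportional exposure target for group A
    value = np.full((na + 1, nb + 1), -np.inf)
    move = np.zeros((na + 1, nb + 1), dtype=np.int8)  # 0 = took A, 1 = took B
    value[0, 0] = 0.0

    def gain(score, pos, share_a):
        utility = score / np.log2(pos + 2)   # discounted utility of this slot
        fairness = -abs(share_a - target_a)  # group-share imbalance penalty
        return utility + lam * fairness

    for i in range(na + 1):
        for j in range(nb + 1):
            if value[i, j] == -np.inf:
                continue
            pos = i + j
            if i < na:  # place the next group-A item at global position pos
                g = value[i, j] + gain(scores_a[i], pos, (i + 1) / (pos + 1))
                if g > value[i + 1, j]:
                    value[i + 1, j], move[i + 1, j] = g, 0
            if j < nb:  # or place the next group-B item instead
                g = value[i, j] + gain(scores_b[j], pos, i / (pos + 1))
                if g > value[i, j + 1]:
                    value[i, j + 1], move[i, j + 1] = g, 1

    order, i, j = [], na, nb  # backtrack the optimal path
    while i + j > 0:
        g = int(move[i, j])
        order.append(g)
        if g == 0:
            i -= 1
        else:
            j -= 1
    order.reverse()
    return order, float(value[na, nb])
```

Setting `lam=0` recovers a pure score-ordered merge of the two lists, while larger `lam` trades discounted utility for group-share balance at every rank position.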
Related papers
- Different Horses for Different Courses: Comparing Bias Mitigation Algorithms in ML [9.579645248339004]
We show significant variance in fairness achieved by several algorithms and the influence of the learning pipeline on fairness scores.
We highlight that most bias mitigation techniques can achieve comparable performance.
We hope our work encourages future research on how various choices in the lifecycle of developing an algorithm impact fairness.
arXiv Detail & Related papers (2024-11-17T15:17:08Z)
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
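The closed-form, variance-optimal flavor of this result can be illustrated with a textbook control variate added to inverse propensity scoring (IPS). The sketch below is standard control-variates algebra assuming known logging propensities; it is not necessarily the exact estimator characterized in the paper, and `cv_ips_estimate` is a hypothetical name.

```python
import numpy as np

def cv_ips_estimate(rewards, target_probs, logging_probs):
    """Off-policy value estimate with an additive control variate:
    V_hat = mean(w * r) - c * (mean(w) - 1).
    Because E[w] = 1 under the logging policy, the correction has zero
    expectation for any fixed c; the variance-minimizing choice is
    c* = Cov(w * r, w) / Var(w), estimated here with plug-in moments
    (which makes the estimator only approximately unbiased)."""
    w = np.asarray(target_probs) / np.asarray(logging_probs)  # importance weights
    wr = w * np.asarray(rewards)
    c = np.cov(wr, w, ddof=1)[0, 1] / np.var(w, ddof=1)
    return wr.mean() - c * (w.mean() - 1.0)
```

With c = 0 this reduces to vanilla IPS; the fitted c removes the part of the variance that is correlated with the importance weights themselves.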
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose xOrder, a model-agnostic post-processing framework for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and with a variety of ranking fairness metrics, both supervised and unsupervised.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Matched Pair Calibration for Ranking Fairness [2.580183306478581]
We propose a test of fairness in score-based ranking systems called matched pair calibration.
We show how our approach generalizes the fairness intuitions of calibration from a binary classification setting to ranking.
arXiv Detail & Related papers (2023-06-06T15:32:30Z)
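One way to read the matched-pair idea is: pair items from different groups whose scores are nearly identical, then test whether their realized outcomes differ. The greedy nearest-score matching and mean-gap statistic below are our simplifications for illustration, not the paper's exact test; `matched_pair_gap` is a hypothetical name.

```python
import numpy as np

def matched_pair_gap(scores_a, labels_a, scores_b, labels_b, tol=0.01):
    """Greedily pair group-A items with unused group-B items whose scores
    differ by at most ``tol``, then compare outcome rates within pairs.
    For a ranker calibrated across groups, the mean label gap on
    score-matched pairs should be close to zero."""
    scores_b = np.asarray(scores_b, dtype=float)
    labels_b = np.asarray(labels_b, dtype=float)
    order = np.argsort(scores_b)          # group-B items sorted by score
    used = np.zeros(len(scores_b), dtype=bool)
    gaps = []
    for s, y in zip(scores_a, labels_a):
        k = np.searchsorted(scores_b[order], s)
        for idx in (k - 1, k):            # nearest candidates on each side
            if (0 <= idx < len(order) and not used[order[idx]]
                    and abs(scores_b[order[idx]] - s) <= tol):
                gaps.append(y - labels_b[order[idx]])
                used[order[idx]] = True
                break
    return float(np.mean(gaps)) if gaps else float("nan")
```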
- Revisiting Long-tailed Image Classification: Survey and Benchmarks with New Evaluation Metrics [88.39382177059747]
A corpus of metrics is designed for measuring the accuracy, robustness, and bounds of algorithms for learning with long-tailed distributions.
Based on our benchmarks, we re-evaluate the performance of existing methods on CIFAR10 and CIFAR100 datasets.
arXiv Detail & Related papers (2023-02-03T02:40:54Z)
- Fair and Optimal Classification via Post-Processing [10.163721748735801]
This paper provides a complete characterization of the inherent tradeoff of demographic parity on classification problems.
We show that the minimum error rate achievable by randomized and attribute-aware fair classifiers is given by the optimal value of a Wasserstein-barycenter problem.
arXiv Detail & Related papers (2022-11-03T00:04:04Z)
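In one dimension the Wasserstein-barycenter characterization has a simple plug-in form: average the groups' quantile functions and remap every score through the result, after which all groups share one score distribution (demographic parity at every threshold). The discretization, sample-share weighting, and the name `barycenter_adjust` below are our assumptions, not the paper's algorithm.

```python
import numpy as np

def barycenter_adjust(scores_by_group, n_quantiles=101):
    """Post-process group-conditional scores onto their Wasserstein-2
    barycenter. In 1-D the barycenter's quantile function is the weighted
    average of the groups' quantile functions, so mapping every group
    through it equalizes the score distributions across groups."""
    groups = {g: np.asarray(s, dtype=float) for g, s in scores_by_group.items()}
    n = sum(len(s) for s in groups.values())
    qs = np.linspace(0.0, 1.0, n_quantiles)
    # barycenter quantile function = sample-share-weighted group quantiles
    bary_q = sum((len(s) / n) * np.quantile(s, qs) for s in groups.values())
    adjusted = {}
    for g, s in groups.items():
        # empirical CDF value (rank / (n_g - 1)) of each score in its group
        ranks = np.argsort(np.argsort(s)) / max(len(s) - 1, 1)
        adjusted[g] = np.interp(ranks, qs, bary_q)
    return adjusted
```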
- Multiple-criteria Based Active Learning with Fixed-size Determinantal Point Processes [43.71112693633952]
We introduce a multiple-criteria based active learning algorithm, which incorporates three complementary criteria, i.e., informativeness, representativeness and diversity.
We show that our method performs significantly better and is more stable than other multiple-criteria based AL algorithms.
arXiv Detail & Related papers (2021-07-04T13:22:54Z)
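A common recipe for fixed-size DPP selection, which the sketch below follows, is greedy MAP inference on a quality-diversity kernel: the diagonal (through a per-item quality score q, e.g., an informativeness estimate) rewards useful items, while off-diagonal RBF similarities penalize near-duplicates. This is a generic construction under assumed definitions of quality and similarity, not necessarily the authors' algorithm; `greedy_kdpp_select` is a hypothetical name.

```python
import numpy as np

def greedy_kdpp_select(features, quality, k):
    """Greedy MAP inference for a fixed-size DPP with kernel
    L = diag(q) @ S @ diag(q). Each step adds the item with the largest
    log-determinant gain; log-dets are recomputed naively for clarity,
    not speed. Assumes k <= number of items."""
    X = np.asarray(features, dtype=float)
    q = np.asarray(quality, dtype=float)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-sq / (2.0 * np.median(sq) + 1e-12))  # RBF similarity
    L = q[:, None] * S * q[None, :]
    selected = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(len(q)):
            if i in selected:
                continue
            idx = np.ix_(selected + [i], selected + [i])
            sign, logdet = np.linalg.slogdet(L[idx])
            val = logdet if sign > 0 else -np.inf
            if val > best_val:
                best, best_val = i, val
        selected.append(best)
    return selected
```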
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the one-class nature of the problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.