Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment
- URL: http://arxiv.org/abs/2307.14668v1
- Date: Thu, 27 Jul 2023 07:42:44 GMT
- Title: Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment
- Authors: Sen Cui, Weishen Pan, Changshui Zhang, Fei Wang
- Abstract summary: We propose a model-agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
- Score: 54.179859639868646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic fairness has been a serious concern and has received
significant interest in the machine learning community. In this paper, we focus
on the bipartite ranking
scenario, where the instances come from either the positive or negative class
and the goal is to learn a ranking function that ranks positive instances
higher than negative ones. While there can be a trade-off between fairness
and performance, we propose a model-agnostic post-processing framework, xOrder,
for achieving fairness in bipartite ranking while maintaining the algorithm's
classification performance. In particular, we formulate the optimization of a
weighted sum of utility and fairness as identifying an optimal warping path
across different protected groups and solve it through a dynamic programming
process. xOrder is compatible
with various classification models and ranking fairness metrics, including
supervised and unsupervised fairness metrics. In addition to binary groups,
xOrder can be applied to multiple protected groups. We evaluate our proposed
algorithm on four benchmark data sets and two real-world patient electronic
health record repositories. xOrder consistently achieves a better balance
between algorithm utility and ranking fairness across a variety of datasets
and metrics. Visualizations of the calibrated ranking scores show that xOrder
mitigates the score distribution shifts between different groups compared with
the baselines. Moreover, additional analyses verify that xOrder remains robust
when faced with fewer samples and with a larger difference between the training
and testing ranking score distributions.
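
To make the post-processing idea concrete, the following is a minimal sketch in the spirit of the method described above: two protected groups keep their own within-group score orderings, and a dynamic program chooses how to interleave them along a warping path. The utility term (cross-group concordance of positives over negatives) and the fairness penalty (deviation from proportional interleaving of the two groups) are simplified stand-ins chosen for illustration, and the function name and parameters are hypothetical rather than the paper's implementation.

```python
"""
Illustrative sketch of a dynamic-programming ordering adjustment in the spirit
of xOrder. The utility and fairness terms are simplified stand-ins chosen for
clarity, not the exact objectives from the paper.
"""
import numpy as np


def xorder_like_merge(scores_a, labels_a, scores_b, labels_b, lam=1.0):
    """Return a merged ordering (list of ('A'|'B', within-group index))."""
    # Keep each group's own ordering fixed: sort by score, descending.
    order_a = np.argsort(-np.asarray(scores_a))
    order_b = np.argsort(-np.asarray(scores_b))
    ya = np.asarray(labels_a)[order_a]
    yb = np.asarray(labels_b)[order_b]
    na, nb = len(ya), len(yb)
    assert na > 0 and nb > 0, "both groups must be non-empty in this sketch"

    # Prefix counts of positives, so each concordance gain is O(1).
    pos_a = np.concatenate([[0], np.cumsum(ya)])
    pos_b = np.concatenate([[0], np.cumsum(yb)])

    dp = np.full((na + 1, nb + 1), -np.inf)
    back = np.zeros((na + 1, nb + 1), dtype=np.int8)  # 0: placed from A, 1: from B
    dp[0, 0] = 0.0

    def penalty(i, j):
        # Crude unsupervised fairness proxy: stay close to proportional
        # interleaving of the two groups (the "diagonal" of the grid).
        return abs(i / na - j / nb)

    for i in range(na + 1):
        for j in range(nb + 1):
            if i == 0 and j == 0:
                continue
            best, arg = -np.inf, 0
            if i > 0:
                # Place the i-th item of A below the j already-placed B items.
                # Each positive B item above it forms a concordant cross pair
                # when this A item is negative.
                gain = pos_b[j] if ya[i - 1] == 0 else 0.0
                cand = dp[i - 1, j] + gain - lam * penalty(i, j)
                if cand > best:
                    best, arg = cand, 0
            if j > 0:
                gain = pos_a[i] if yb[j - 1] == 0 else 0.0
                cand = dp[i, j - 1] + gain - lam * penalty(i, j)
                if cand > best:
                    best, arg = cand, 1
            dp[i, j], back[i, j] = best, arg

    # Backtrack the warping path to recover the merged ranking (top to bottom).
    merged, i, j = [], na, nb
    while i > 0 or j > 0:
        if back[i, j] == 0:
            i -= 1
            merged.append(("A", int(order_a[i])))
        else:
            j -= 1
            merged.append(("B", int(order_b[j])))
    merged.reverse()
    return merged
```

After backtracking the warping path, adjusted scores can be assigned from the merged ranks, which changes the cross-group ordering without reordering samples within either group.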
Related papers
- A structured regression approach for evaluating model performance across intersectional subgroups [53.91682617836498]
Disaggregated evaluation is a central task in AI fairness assessment, where the goal is to measure an AI system's performance across different subgroups.
We introduce a structured regression approach to disaggregated evaluation that we demonstrate can yield reliable system performance estimates even for very small subgroups.
arXiv Detail & Related papers (2024-01-26T14:21:45Z) - Matched Pair Calibration for Ranking Fairness [2.580183306478581]
We propose a test of fairness in score-based ranking systems called matched pair calibration.
We show how our approach generalizes the fairness intuitions of calibration from a binary classification setting to ranking.
arXiv Detail & Related papers (2023-06-06T15:32:30Z) - Revisiting Long-tailed Image Classification: Survey and Benchmarks with
New Evaluation Metrics [88.39382177059747]
A corpus of metrics is designed for measuring the accuracy, robustness, and bounds of algorithms for learning with long-tailed distribution.
Based on our benchmarks, we re-evaluate the performance of existing methods on CIFAR10 and CIFAR100 datasets.
arXiv Detail & Related papers (2023-02-03T02:40:54Z) - Learning by Sorting: Self-supervised Learning with Group Ordering
Constraints [75.89238437237445]
This paper proposes a new variation of the contrastive learning objective, Group Ordering Constraints (GroCo).
It sorts the distances of positive and negative pairs and computes the loss from how many positive pairs have a larger distance than the negative pairs and are therefore not ordered correctly; a minimal sketch of this violation count appears after this list.
We evaluate the proposed formulation on various self-supervised learning benchmarks and show that it not only improves over vanilla contrastive learning but is also competitive with comparable methods in linear probing and outperforms current methods in k-NN performance.
arXiv Detail & Related papers (2023-01-05T11:17:55Z) - Fair and Optimal Classification via Post-Processing [10.163721748735801]
This paper provides a complete characterization of the inherent tradeoff of demographic parity on classification problems.
We show that the minimum error rate achievable by randomized and attribute-aware fair classifiers is given by the optimal value of a Wasserstein-barycenter problem.
arXiv Detail & Related papers (2022-11-03T00:04:04Z) - Ensemble Classifier Design Tuned to Dataset Characteristics for Network
Intrusion Detection [0.0]
Two new algorithms are proposed to address the class overlap issue in the dataset.
The proposed design is evaluated for both binary and multi-category classification.
arXiv Detail & Related papers (2022-05-08T21:06:42Z) - Heuristic Search for Rank Aggregation with Application to Label Ranking [16.275063634853584]
We propose an effective hybrid evolutionary ranking algorithm to solve the rank aggregation problem.
The algorithm features a semantic crossover based on concordant pairs and a late acceptance local search reinforced by an efficient incremental evaluation technique.
Experiments are conducted to assess the algorithm, indicating a highly competitive performance on benchmark instances.
arXiv Detail & Related papers (2022-01-11T11:43:17Z) - PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired metrics in the limit of zero temperature.
arXiv Detail & Related papers (2020-12-12T05:07:36Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)