A Pre-processing Method for Fairness in Ranking
- URL: http://arxiv.org/abs/2110.15503v1
- Date: Fri, 29 Oct 2021 02:55:32 GMT
- Title: A Pre-processing Method for Fairness in Ranking
- Authors: Ryosuke Sonoda
- Abstract summary: We propose a fair ranking framework that evaluates the order of training data in a pairwise manner.
We show that our method outperforms existing methods in the trade-off between accuracy and fairness on real-world datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fair ranking problems arise in many decision-making processes that often
necessitate a trade-off between accuracy and fairness.
Many existing studies have proposed correction methods such as adding
fairness constraints to a ranking model's loss.
However, correcting the bias in the training data for fair ranking remains a challenge,
and the accuracy-fairness trade-off of existing ranking models leaves room for improvement.
In this paper, we propose a fair ranking framework that evaluates the order
of training data in a pairwise manner and supports various fairness measures
for ranking.
To the best of our knowledge, this study is the first to propose a
pre-processing method that solves fair ranking problems via pairwise ordering.
The fair pairwise ordering method is well suited to training fair ranking
models because it makes the resulting ranking likely to achieve parity
across groups.
As long as the fairness measures in ranking can be represented as linear
constraints on the ranking model, we prove that minimizing the loss
function subject to these constraints reduces to a closed-form solution of a
minimization problem augmented by weights on the training data.
This closed-form solution motivates a practical and stable algorithm
that alternates between optimizing the weights and the model parameters.
Empirical results on real-world datasets demonstrate that our method
outperforms existing methods in the trade-off between accuracy and fairness
under various fairness measures.
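As a reading aid, here is a minimal sketch of the alternating scheme the abstract describes, assuming a linear scorer, a pairwise logistic loss, and a multiplicative weight update; the function name `fair_pairwise_fit` and the weight-update rule are illustrative assumptions, not the paper's closed-form derivation.

```python
import numpy as np

def fair_pairwise_fit(X, y, group, lr=0.1, eta=0.5, rounds=10, steps=200):
    """Alternate between (1) fitting a linear scorer on weighted pairwise
    comparisons and (2) re-weighting pairs from a group-fairness signal."""
    n, d = X.shape
    w = np.zeros(d)
    # Ordered pairs (i, j): item i should rank above item j.
    pairs = [(i, j) for i in range(n) for j in range(n) if y[i] > y[j]]
    c = np.ones(len(pairs))                       # per-pair weights
    for _ in range(rounds):
        # Step 1: minimize the weighted pairwise logistic loss.
        for _ in range(steps):
            grad = np.zeros(d)
            for ck, (i, j) in zip(c, pairs):
                margin = (X[i] - X[j]) @ w
                grad -= ck * (X[i] - X[j]) / (1.0 + np.exp(margin))
            w -= lr * grad / len(pairs)
        # Step 2: stand-in for the closed-form weight update -- up-weight
        # pairs whose preferred item belongs to the under-ranked group.
        s = X @ w
        means = {g: s[group == g].mean() for g in np.unique(group)}
        low = min(means, key=means.get)           # under-ranked group
        viol = np.array([float(group[i] == low) for i, _ in pairs])
        c *= np.exp(eta * viol)
        c *= len(pairs) / c.sum()                 # renormalize the weights
    return w
```

In the paper, the weights come from the closed-form solution of the constrained problem; the exponentiated-gradient-style update above merely plays that role in the sketch.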
Related papers
- Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium [0.3350491650545292]
Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness.
We propose a novel methodology grounded in bilevel optimization principles.
Our deep learning-based approach concurrently optimizes for both accuracy and fairness objectives.
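For intuition only, a toy Stackelberg-style loop over two parameter blocks might look as follows; the additive score composition, the logistic loss, and the demographic-parity objective are assumptions of this sketch, not the FairBiNN architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stackelberg_fair_fit(X, y, group, outer=50, inner=25, lr=0.1):
    """Toy leader-follower loop: the accuracy player updates theta_a on the
    logistic loss, then the fairness player best-responds with theta_f by
    shrinking the squared demographic-parity gap. Scores are additive."""
    d = X.shape[1]
    theta_a = np.zeros(d)            # accuracy-player parameters (leader)
    theta_f = np.zeros(d)            # fairness-player parameters (follower)
    g1, g0 = group == 1, group == 0
    for _ in range(outer):
        p = sigmoid(X @ (theta_a + theta_f))
        theta_a -= lr * X.T @ (p - y) / len(y)          # leader's step
        for _ in range(inner):                          # follower's response
            p = sigmoid(X @ (theta_a + theta_f))
            gap = p[g1].mean() - p[g0].mean()
            dgap = (X[g1].T @ (p[g1] * (1 - p[g1])) / g1.sum()
                    - X[g0].T @ (p[g0] * (1 - p[g0])) / g0.sum())
            theta_f -= lr * 2.0 * gap * dgap            # d(gap^2)/d(theta_f)
    return theta_a, theta_f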
arXiv Detail & Related papers (2024-10-21T18:53:39Z)
- Learn to be Fair without Labels: a Distribution-based Learning Framework for Fair Ranking
We propose a distribution-based fair learning framework (DLF) that does not require labels by replacing the unavailable fairness labels with target fairness exposure distributions.
Our proposed framework achieves better fairness performance while maintaining better control over the fairness-relevance trade-off.
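A label-free fairness signal of this kind can be sketched by comparing the exposure a ranking gives each group with a target distribution; the logarithmic position-bias model and the squared divergence below are assumptions, not DLF's exact loss.

```python
import numpy as np

def group_exposure(scores, group, groups=(0, 1)):
    """Exposure each group receives under the ranking induced by `scores`,
    using a logarithmic position-bias model (an assumption of this sketch)."""
    order = np.argsort(-scores)               # best-scored item first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))     # rank of each item, 0-indexed
    exp_item = 1.0 / np.log2(ranks + 2.0)
    out = np.array([exp_item[group == g].sum() for g in groups])
    return out / out.sum()                    # normalize to a distribution

def exposure_loss(scores, group, target):
    """Label-free fairness signal: divergence between the realized and the
    target exposure distributions (squared error here for simplicity)."""
    return float(((group_exposure(scores, group) - target) ** 2).sum())
```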
arXiv Detail & Related papers (2024-05-28T03:49:04Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose xOrder, a model-agnostic post-processing framework for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
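For intuition, a much cruder model-agnostic post-hoc adjustment in the same spirit (not the xOrder algorithm itself) is per-group quantile matching onto the pooled score distribution:

```python
import numpy as np

def quantile_adjust(scores, group):
    """Map each group's scores onto the pooled score distribution by
    quantile matching -- a simple model-agnostic post-hoc adjustment."""
    pooled = np.sort(scores)
    adjusted = np.empty_like(scores, dtype=float)
    for g in np.unique(group):
        idx = np.where(group == g)[0]
        # Within-group quantile of each score, in [0, 1].
        q = np.argsort(np.argsort(scores[idx])) / max(len(idx) - 1, 1)
        # Read the pooled distribution off at the same quantiles.
        adjusted[idx] = np.quantile(pooled, q)
    return adjusted
```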
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
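The FATE idea can be written in a few lines under one plausible reading of "normalized fairness improvement over accuracy drop"; the exact definition and weighting are in the FairAdaBN paper, so treat this sketch as an assumption.

```python
def fate(acc_base, acc_model, fair_base, fair_model, lam=1.0):
    """Hypothetical reading of FATE: normalized fairness improvement minus
    lam times normalized accuracy drop. Here `fair_*` are fairness-criterion
    values where lower is fairer; see the FairAdaBN paper for the exact form."""
    fairness_gain = (fair_base - fair_model) / fair_base
    accuracy_drop = (acc_base - acc_model) / acc_base
    return fairness_gain - lam * accuracy_drop
```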
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- Improving Fair Training under Correlation Shifts [33.385118640843416]
When the bias between labels and sensitive groups changes, the fairness of the trained model is directly affected and can worsen.
We analytically show that existing in-processing fair algorithms have fundamental limits in accuracy and group fairness.
We propose a novel pre-processing step that samples the input data to reduce correlation shifts.
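An illustrative version of such a pre-processing sampler (not the paper's algorithm) draws a subsample in which every group has roughly the same positive rate, so that label and group are approximately independent in the training data:

```python
import numpy as np

def decorrelate_sample(y, group, seed=0):
    """Illustrative pre-processing sampler: keep a subsample whose
    per-group positive rate matches the global rate, shrinking the
    label-group correlation before training."""
    rng = np.random.default_rng(seed)
    target = y.mean()                      # desired per-group positive rate
    keep = []
    for g in np.unique(group):
        idx = np.where(group == g)[0]
        pos, neg = idx[y[idx] == 1], idx[y[idx] == 0]
        n_pos = min(int(round(target * len(idx))), len(pos))
        n_neg = min(len(idx) - n_pos, len(neg))
        keep.extend(rng.choice(pos, n_pos, replace=False))
        keep.extend(rng.choice(neg, n_neg, replace=False))
    return np.array(sorted(keep))          # indices of the subsample
```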
arXiv Detail & Related papers (2023-02-05T07:23:35Z) - Stochastic Methods for AUC Optimization subject to AUC-based Fairness
Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
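A penalized stand-in for this constrained formulation can be sketched with a smooth pairwise AUC surrogate; the sigmoid surrogate, the absolute-gap penalty, and the two-group setup are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pairwise_auc(s_pos, s_neg):
    """Smooth AUC surrogate: mean sigmoid margin over all pos/neg pairs."""
    return sigmoid(s_pos[:, None] - s_neg[None, :]).mean()

def fair_auc_objective(w, X, y, group, lam=1.0):
    """Penalized stand-in for the constrained problem: maximize the overall
    surrogate AUC while keeping the two group-conditional AUCs close.
    Assumes two groups, each with positive and negative examples."""
    s = X @ w
    auc_all = pairwise_auc(s[y == 1], s[y == 0])
    auc_g = [pairwise_auc(s[(y == 1) & (group == g)],
                          s[(y == 0) & (group == g)])
             for g in (0, 1)]
    return -(auc_all - lam * abs(auc_g[0] - auc_g[1]))
```

Minimizing `fair_auc_objective` with any stochastic optimizer then trades overall ranking quality against the intergroup AUC gap via `lam`.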
arXiv Detail & Related papers (2022-12-23T22:29:08Z)
- On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired metrics in the limit of zero temperature.
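PiRank's surrogate is built on differentiable sorting, which is more involved than what fits here, but the zero-temperature claim can be illustrated with a generic temperature-controlled soft rank (an assumption of this sketch, not PiRank's construction):

```python
import numpy as np

def soft_ranks(scores, tau):
    """rank_i ~= 1 + sum_{j != i} sigma((s_j - s_i) / tau). As tau -> 0 each
    sigmoid becomes a step function and the exact hard descending ranks
    (1-indexed) are recovered, mirroring the zero-temperature claim."""
    diff = (scores[None, :] - scores[:, None]) / tau
    sig = 1.0 / (1.0 + np.exp(-diff))
    np.fill_diagonal(sig, 0.0)        # drop the self-comparison term
    return 1.0 + sig.sum(axis=1)

print(soft_ranks(np.array([0.1, 2.0, -1.0]), tau=1e-4))  # ~ [2. 1. 3.]
```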
arXiv Detail & Related papers (2020-12-12T05:07:36Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)