Fairness in Ranking: Robustness through Randomization without the Protected Attribute
- URL: http://arxiv.org/abs/2403.19419v1
- Date: Thu, 28 Mar 2024 13:50:24 GMT
- Title: Fairness in Ranking: Robustness through Randomization without the Protected Attribute
- Authors: Andrii Kliachkin, Eleni Psaroudaki, Jakub Marecek, Dimitris Fotakis
- Abstract summary: We propose a randomized method for post-processing rankings that does not require the availability of the protected attribute.
In an extensive numerical study, we show the robustness of our methods with respect to P-Fairness and their effectiveness with respect to Normalized Discounted Cumulative Gain (NDCG) relative to the baseline ranking, improving on previously proposed methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been great interest in fairness in machine learning, especially in relation to classification problems. In ranking-related problems, such as in online advertising, recommender systems, and HR automation, much work on fairness remains to be done. Two complications arise: first, the protected attribute may not be available in many applications. Second, there are multiple measures of fairness of rankings, and optimization-based methods utilizing a single measure of fairness of rankings may produce rankings that are unfair with respect to other measures. In this work, we propose a randomized method for post-processing rankings that does not require the availability of the protected attribute. In an extensive numerical study, we show the robustness of our methods with respect to P-Fairness and their effectiveness with respect to Normalized Discounted Cumulative Gain (NDCG) relative to the baseline ranking, improving on previously proposed methods.
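To make the abstract's two yardsticks concrete, the sketch below shows, under simplifying assumptions, how one might (i) compute NDCG, (ii) audit a ranking with a prefix-proportion check in the spirit of P-Fairness, and (iii) apply a toy randomized post-processor that perturbs a baseline ranking without ever reading the protected attribute. This is not the paper's algorithm; `prefix_fair` and `randomize_ranking` are hypothetical stand-ins for illustration only.

```python
import math
import random

def dcg(gains):
    """Discounted Cumulative Gain of a list of relevance gains, top first."""
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg(ranking, relevance):
    """NDCG of `ranking` (item ids, best first) against known relevances."""
    realized = dcg([relevance[item] for item in ranking])
    ideal = dcg(sorted(relevance.values(), reverse=True))
    return realized / ideal if ideal > 0 else 0.0

def prefix_fair(ranking, protected, p):
    """Simplified check in the spirit of P-Fairness: every top-k prefix
    should contain at least floor(p * k) protected items. A proportion
    heuristic, not the exact statistical test from the literature."""
    count = 0
    for k, item in enumerate(ranking, start=1):
        count += item in protected
        if count < math.floor(p * k):
            return False
    return True

def randomize_ranking(ranking, num_swaps, rng=random):
    """Illustrative randomized post-processing: perturb the baseline by
    random adjacent swaps, using no protected attribute. Not the paper's
    method; it only shows the shape of such a post-processor."""
    perturbed = list(ranking)
    for _ in range(num_swaps):
        i = rng.randrange(len(perturbed) - 1)
        perturbed[i], perturbed[i + 1] = perturbed[i + 1], perturbed[i]
    return perturbed

# Usage: the post-processor sees only the baseline ranking and relevances;
# the protected set is used here solely to *audit* the outcome.
relevance = {"a": 3.0, "b": 2.0, "c": 1.0, "d": 0.5}
baseline = sorted(relevance, key=relevance.get, reverse=True)
perturbed = randomize_ranking(baseline, num_swaps=2)
print(f"NDCG retained: {ndcg(perturbed, relevance):.3f}",
      f"prefix-fair: {prefix_fair(perturbed, protected={'c', 'd'}, p=0.3)}")
```

The design point the abstract makes is that randomization trades a small, measurable NDCG loss against robustness across several fairness measures, rather than optimizing any single one.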
Related papers
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment
We propose a model-agnostic post-processing framework, xOrder, for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Anti-Exploration by Random Network Distillation
We show that a naive choice of conditioning for the Random Network Distillation (RND) is not discriminative enough to be used as an uncertainty estimator.
We show that this limitation can be avoided with conditioning based on Feature-wise Linear Modulation (FiLM).
We evaluate it on the D4RL benchmark, showing that it is capable of achieving performance comparable to ensemble-based methods and outperforming ensemble-free approaches by a wide margin.
arXiv Detail & Related papers (2023-01-31T13:18:33Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- MANI-Rank: Multiple Attribute and Intersectional Group Fairness for Consensus Ranking
Group fairness in rankings, and in particular in rank aggregation, remains in its infancy.
Recent work introduced the concept of fair rank aggregation for combining rankings, but restricted it to the case in which candidates have a single binary protected attribute.
Yet it remains an open problem how to create a consensus ranking that represents the preferences of all rankers.
We are the first to define and solve this open Multi-attribute Fair Consensus Ranking problem.
arXiv Detail & Related papers (2022-07-20T16:36:20Z)
- Recommendation Systems with Distribution-Free Reliability Guarantees
We show how to return a set of items rigorously guaranteed to contain mostly good items.
Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate.
We evaluate our methods on the Yahoo! Learning to Rank and MSMarco datasets.
arXiv Detail & Related papers (2022-07-04T17:49:25Z)
- Fairness for Robust Learning to Rank
We derive a new ranking system based on the first principles of distributional robustness.
We show that our approach provides better utility for highly fair rankings than existing baseline methods.
arXiv Detail & Related papers (2021-12-12T17:56:56Z)
- A Pre-processing Method for Fairness in Ranking
We propose a fair ranking framework that evaluates the order of training data in a pairwise manner.
We show that our method outperforms the existing methods in the trade-off between accuracy and fairness over real-world datasets.
arXiv Detail & Related papers (2021-10-29T02:55:32Z)
- Estimation of Fair Ranking Metrics with Incomplete Judgments
We propose a sampling strategy and estimation technique for four fair ranking metrics.
We formulate a robust and unbiased estimator which can operate even with a very limited number of labeled items.
arXiv Detail & Related papers (2021-08-11T10:57:00Z)
- Fairness Through Regularization for Learning to Rank
We show how to transfer numerous fairness notions from binary classification to a learning-to-rank context.
Our formalism allows us to design a method for incorporating fairness objectives with provable generalization guarantees.
arXiv Detail & Related papers (2021-02-11T13:29:08Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model-agnostic post-processing framework for balancing fairness and utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.