Fairness for Robust Learning to Rank
- URL: http://arxiv.org/abs/2112.06288v1
- Date: Sun, 12 Dec 2021 17:56:56 GMT
- Title: Fairness for Robust Learning to Rank
- Authors: Omid Memarrast, Ashkan Rezaei, Rizal Fathony, Brian Ziebart
- Abstract summary: We derive a new ranking system based on the first principles of distributional robustness.
We show that our approach provides better utility for highly fair rankings than existing baseline methods.
- Score: 8.019491256870557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While conventional ranking systems focus solely on maximizing the utility of
the ranked items to users, fairness-aware ranking systems additionally try to
balance the exposure for different protected attributes such as gender or race.
To achieve this type of group fairness for ranking, we derive a new ranking
system based on the first principles of distributional robustness. We formulate
a minimax game between a player, who chooses a distribution over rankings to
maximize utility subject to fairness constraints, and an adversary, who seeks
to minimize utility while matching statistics of the training data. We
show that our approach provides better utility for highly fair rankings than
existing baseline methods.
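As a rough schematic of the game described above (the notation here is assumed, not taken verbatim from the paper): let P be the player's distribution over rankings, Q the adversary's label distribution constrained to a statistic-matching set Xi, and let group exposures be compared under a tolerance epsilon.
```latex
% Schematic fairness-constrained minimax game (assumed notation):
%   P(\sigma \mid x) : player's distribution over rankings \sigma of items x
%   Q \in \Xi        : adversarial label distribution matching training statistics
\max_{P(\sigma \mid x)} \; \min_{Q \in \Xi} \;
  \mathbb{E}_{\sigma \sim P,\; y \sim Q}\!\left[\mathrm{util}(\sigma, y)\right]
\quad \text{s.t.} \quad
  \left| \mathbb{E}_{\sigma \sim P}\!\left[\mathrm{Exp}(G_0 \mid \sigma)\right]
       - \mathbb{E}_{\sigma \sim P}\!\left[\mathrm{Exp}(G_1 \mid \sigma)\right] \right| \le \epsilon
```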
Related papers
- Fairness in Ranking: Robustness through Randomization without the Protected Attribute [15.086941303164375]
We propose a randomized method for post-processing rankings that does not require the availability of the protected attribute.
In an extensive numerical study, we show that our method is robust with respect to P-Fairness and effective with respect to Normalized Discounted Cumulative Gain (NDCG) relative to the baseline ranking, improving on previously proposed methods.
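For reference, a minimal sketch of the NDCG metric cited above, using one common gain/discount convention (the paper's exact variant may differ):
```python
import numpy as np

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    rel = np.asarray(relevances, dtype=float)
    ranks = np.arange(1, len(rel) + 1)
    return float(np.sum((2.0 ** rel - 1) / np.log2(ranks + 1)))

def ndcg(ranked_relevances):
    """NDCG: DCG of the given order divided by the DCG of the ideal order."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# The utility cost of a fairness-motivated reordering shows up as an NDCG drop:
print(ndcg([3, 2, 3, 0, 1]))  # ~0.96 for this slightly non-ideal order
```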
arXiv Detail & Related papers (2024-03-28T13:50:24Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
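A minimal sketch of the OWA aggregation itself follows; the paper's actual contribution, differentiating through constrained optimization of such objectives, is not shown, and the exposure values and weights here are illustrative assumptions:
```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Average: sort values descending, then weighted sum.
    Weights that grow toward the tail emphasize the worst-off entries,
    which is how OWA encodes fairness across groups or users."""
    x = np.sort(np.asarray(values, dtype=float))[::-1]  # descending order
    w = np.asarray(weights, dtype=float)
    assert x.shape == w.shape and np.isclose(w.sum(), 1.0)
    return float(np.dot(w, x))

group_exposures = [0.9, 0.5, 0.2]          # hypothetical per-group exposures
fair_weights    = [0.1, 0.3, 0.6]          # most weight on the smallest value
print(owa(group_exposures, fair_weights))  # 0.36, dominated by the worst-off group
```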
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- A Minimaximalist Approach to Reinforcement Learning from Human Feedback [49.45285664482369]
We present Self-Play Preference Optimization (SPO), an algorithm for reinforcement learning from human feedback.
Our approach is minimalist in that it requires neither training a reward model nor unstable adversarial training.
We demonstrate that on a suite of continuous control tasks, we are able to learn significantly more efficiently than reward-model based approaches.
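A toy, bandit-style illustration of the general idea, preference-driven self-play with no learned reward model, follows. This is not the SPO algorithm itself (SPO's update is derived from a minimax-winner formulation), and the oracle here is a hypothetical stand-in for human feedback:
```python
import numpy as np

rng = np.random.default_rng(0)

def preference_oracle(traj_a, traj_b):
    """Hypothetical stand-in for human feedback: prefer the higher-return rollout."""
    return 0 if traj_a.sum() >= traj_b.sum() else 1

logits = np.zeros(3)  # toy softmax policy over 3 actions

for _ in range(1000):
    probs = np.exp(logits) / np.exp(logits).sum()
    a, b = rng.choice(3, size=2, p=probs)  # self-play: both samples from the same policy
    rollout = lambda act: rng.normal(loc=float(act), size=5)  # action index = mean reward
    winner = (a, b)[preference_oracle(rollout(a), rollout(b))]
    grad = -probs                          # REINFORCE-style step toward the winner
    grad[winner] += 1.0
    logits += 0.05 * grad

print(np.round(np.exp(logits) / np.exp(logits).sum(), 3))  # mass concentrates on action 2
```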
arXiv Detail & Related papers (2024-01-08T17:55:02Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model-agnostic post-processing framework, xOrder, for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and with both supervised and unsupervised ranking fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
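As a loose illustration of post-hoc ordering adjustment across two protected groups (this greedy merge is a hypothetical sketch, not the xOrder algorithm):
```python
def fair_merge(scores_a, scores_b, lam=0.5):
    """Greedily merge two per-group score lists, trading off score against the
    running exposure imbalance between groups (hypothetical illustration)."""
    a, b = sorted(scores_a, reverse=True), sorted(scores_b, reverse=True)
    out, taken = [], {"a": 0, "b": 0}
    while a or b:
        def cost(head, grp):
            imb = abs((taken["a"] + (grp == "a")) - (taken["b"] + (grp == "b")))
            return -head + lam * imb  # lower is better
        pick_a = bool(a) and (not b or cost(a[0], "a") <= cost(b[0], "b"))
        grp, lst = ("a", a) if pick_a else ("b", b)
        out.append((grp, lst.pop(0)))
        taken[grp] += 1
    return out

# Within-group order is preserved; groups interleave despite the score gap.
print(fair_merge([0.9, 0.8, 0.7], [0.6, 0.5]))
```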
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z)
- End-to-end Learning for Fair Ranking Systems [44.82771494830451]
This paper introduces Smart Predict and Optimize for Fair Ranking (SPOFR), an integrated optimization and learning framework for fairness-constrained learning to rank.
It is shown to significantly improve current state-of-the-art fair learning-to-rank systems with respect to established performance metrics.
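Below is a minimal sketch of the kind of fairness-constrained ranking optimization such frameworks embed. This exposure-constrained linear program follows the common probabilistic-ranking formulation and is an assumption for illustration, not SPOFR's exact program:
```python
import numpy as np
from scipy.optimize import linprog

u = np.array([0.8, 0.6, 0.4, 0.2])      # predicted relevances (illustrative)
g = np.array([0, 0, 1, 1])              # protected-group membership
n = len(u)
v = 1.0 / np.log2(np.arange(2, n + 2))  # position exposure weights

# Variable P[i, j]: probability that item i is placed at position j.
c = -np.outer(u, v).ravel()             # linprog minimizes, so negate utility

A_eq, b_eq = [], []
for i in range(n):                      # each item fills one position in expectation
    row = np.zeros((n, n)); row[i, :] = 1.0
    A_eq.append(row.ravel()); b_eq.append(1.0)
for j in range(n):                      # each position holds one item in expectation
    row = np.zeros((n, n)); row[:, j] = 1.0
    A_eq.append(row.ravel()); b_eq.append(1.0)

row = np.zeros((n, n))                  # equal average exposure across the two groups
for i in range(n):
    row[i, :] = (1.0 if g[i] == 0 else -1.0) * v / (g == g[i]).sum()
A_eq.append(row.ravel()); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
print(res.x.reshape(n, n).round(3))     # marginal ranking probabilities
```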
arXiv Detail & Related papers (2021-11-21T03:25:04Z)
- Individually Fair Ranking [23.95661284311917]
We develop an algorithm to train individually fair learning-to-rank models.
The proposed approach ensures items from minority groups appear alongside similar items from majority groups.
arXiv Detail & Related papers (2021-03-19T21:17:11Z)
- Fairness Through Regularization for Learning to Rank [33.52974791836553]
We show how to transfer numerous fairness notions from binary classification to a learning to rank context.
Our formalism allows us to design a method for incorporating fairness objectives with provable generalization guarantees.
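A minimal sketch of a regularized objective of this flavor, a pairwise ranking loss plus a fairness penalty; the specific penalty and data below are illustrative assumptions, and the paper covers many fairness notions with generalization guarantees this sketch does not claim:
```python
import numpy as np

def fairness_regularized_loss(scores, labels, groups, lam=1.0):
    """Pairwise logistic ranking loss plus a squared mean-score-gap penalty
    between two groups (an illustrative fairness regularizer)."""
    s, y, g = (np.asarray(v, dtype=float) for v in (scores, labels, groups))
    # Logistic loss over all (relevant, non-relevant) score pairs.
    pos, neg = s[y == 1], s[y == 0]
    rank_loss = np.mean(np.log1p(np.exp(-(pos[:, None] - neg[None, :]))))
    # Regularizer: squared difference of mean scores across the two groups.
    fair_pen = (s[g == 0].mean() - s[g == 1].mean()) ** 2
    return rank_loss + lam * fair_pen

print(fairness_regularized_loss([2.0, 1.5, 0.3, 0.1], [1, 1, 0, 0], [0, 1, 0, 1]))
```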
arXiv Detail & Related papers (2021-02-11T13:29:08Z)
- On the Problem of Underranking in Group-Fair Ranking [8.963918049835375]
Bias in ranking systems can worsen social and economic inequalities, polarize opinions, and reinforce stereotypes.
In this paper, we formulate the problem of underranking in group-fair rankings, which was not addressed in previous work.
We give a fair ranking algorithm that takes any given ranking and outputs another ranking with simultaneous underranking and group fairness guarantees.
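A greedy sketch of prefix-wise group-fair re-ranking follows (illustrative only; the paper's algorithm additionally bounds how far any item can be underranked, which this sketch does not guarantee):
```python
def group_fair_rerank(items, groups, min_frac):
    """Walk down the input ranking; whenever a prefix would violate a per-group
    minimum share, promote the highest-ranked item of the deficient group."""
    remaining, out = list(zip(items, groups)), []
    while remaining:
        k = len(out) + 1
        counts = {}
        for _, grp in out:
            counts[grp] = counts.get(grp, 0) + 1
        deficient = next((grp for grp, frac in min_frac.items()
                          if counts.get(grp, 0) < int(frac * k)), None)
        idx = next((i for i, (_, grp) in enumerate(remaining) if grp == deficient), 0)
        out.append(remaining.pop(idx))
    return [item for item, _ in out]

ranking = ["d1", "d2", "d3", "d4", "d5", "d6"]
grps    = ["A",  "A",  "A",  "B",  "B",  "B"]
print(group_fair_rerank(ranking, grps, {"B": 0.4}))  # d4 is promoted above d3
```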
arXiv Detail & Related papers (2020-09-24T14:56:10Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn, from labeled data, a scoring function that ranks positive individuals higher than negative ones.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and algorithm utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)