Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and
Ex-Post Fairness
- URL: http://arxiv.org/abs/2308.13242v1
- Date: Fri, 25 Aug 2023 08:27:43 GMT
- Authors: Sruthi Gorantla, Eshaan Bhansali, Amit Deshpande, Anand Louis
- Abstract summary: In learning-to-rank, optimizing only the relevance can cause representational harm to certain categories of items.
In this paper, we propose a novel algorithm that maximizes expected relevance over those rankings that satisfy given representation constraints.
- Score: 5.349671569838342
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In learning-to-rank (LTR), optimizing only the relevance (or the expected
ranking utility) can cause representational harm to certain categories of
items. Moreover, if there is implicit bias in the relevance scores, LTR models
may fail to optimize for true relevance. Previous works have proposed efficient
algorithms to train stochastic ranking models that achieve fairness of exposure
to the groups ex-ante (or, in expectation), which may not guarantee
representation fairness to the groups ex-post, that is, after realizing a
ranking from the stochastic ranking model. Typically, ex-post fairness is
achieved by post-processing, but previous work does not train stochastic
ranking models that are aware of this post-processing.
In this paper, we propose a novel objective that maximizes expected relevance
only over those rankings that satisfy given representation constraints to
ensure ex-post fairness. Building upon recent work on an efficient sampler for
ex-post group-fair rankings, we propose a group-fair Plackett-Luce model and
show that it can be efficiently optimized for our objective in the LTR
framework.
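
The core idea is to sample only from Plackett-Luce rankings that satisfy the representation constraints. As a rough illustration only (the paper builds on a dedicated, more efficient sampler for ex-post group-fair rankings; the function and parameter names below are hypothetical), the sketch samples positions sequentially with PL softmax probabilities, restricting the candidate set whenever the remaining top-k slots are all needed to meet unmet group lower bounds:

```python
import math
import random

def sample_group_fair_pl(scores, groups, lower_bounds, k, rng=None):
    """Sample one ranking from a Plackett-Luce (PL) model whose top-k
    prefix contains at least lower_bounds[g] items of group g (ex-post).

    Greedy restriction: draw positions one at a time with PL softmax
    probabilities, but once the remaining top-k slots are all needed to
    satisfy unmet group quotas, draw only from those groups.
    Assumes sum(lower_bounds.values()) <= k, i.e. the constraints are feasible.
    """
    rng = rng or random.Random(0)
    remaining = list(range(len(scores)))
    counts = {g: 0 for g in lower_bounds}
    ranking = []
    while remaining:
        t = len(ranking)
        candidates = remaining
        if t < k:
            # total unmet quota vs. slots still open in the top-k prefix
            deficit = sum(max(0, lb - counts[g]) for g, lb in lower_bounds.items())
            if deficit == k - t:
                candidates = [i for i in remaining
                              if counts.get(groups[i], 0) < lower_bounds.get(groups[i], 0)]
        # Plackett-Luce step: pick proportionally to exp(score) among candidates
        weights = [math.exp(scores[i]) for i in candidates]
        chosen = rng.choices(candidates, weights=weights, k=1)[0]
        ranking.append(chosen)
        remaining.remove(chosen)
        if groups[chosen] in counts:
            counts[groups[chosen]] += 1
    return ranking
```

Every ranking this procedure returns satisfies the lower-bound constraints ex-post, i.e. in the realized ranking itself, not merely in expectation.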
Experiments on three real-world datasets show that our group-fair algorithm
guarantees fairness while usually achieving better relevance than the LTR
baselines. It also achieves better relevance than post-processing baselines
that likewise ensure ex-post fairness. Further, when implicit bias is injected
into the training data, our algorithm typically outperforms existing LTR
baselines in relevance.
Related papers
- Preference Learning Algorithms Do Not Learn Preference Rankings [62.335733662381884]
We study the conventional wisdom that preference learning trains models to assign higher likelihoods to more preferred outputs than to less preferred ones.
We find that most state-of-the-art preference-tuned models achieve a ranking accuracy of less than 60% on common preference datasets.
arXiv Detail & Related papers (2024-05-29T21:29:44Z)
- Optimal Group Fair Classifiers from Linear Post-Processing [10.615965454674901]
We propose a post-processing algorithm for fair classification that mitigates model bias under a unified family of group fairness criteria.
It achieves fairness by re-calibrating the output score of the given base model with a "fairness cost" -- a linear combination of the (predicted) group memberships.
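
The "fairness cost" described above can be pictured as a per-group linear offset on the base model's score. The toy sketch below is illustrative only: the paper derives the cost coefficients from the chosen group-fairness criterion, whereas here they are supplied by hand, and the function names are hypothetical:

```python
def fair_adjusted_score(base_score, group, fairness_cost):
    """Re-calibrate a base model's score with a linear fairness cost:
    subtract a per-group coefficient lambda_g from the raw score, so the
    effective decision threshold shifts differently for each group."""
    return base_score - fairness_cost.get(group, 0.0)

def fair_classify(base_score, group, fairness_cost, threshold=0.5):
    """Classify positively when the adjusted score clears the threshold."""
    return fair_adjusted_score(base_score, group, fairness_cost) >= threshold
```

Because the adjustment is a monotone shift per group, the base model never needs retraining; only the group-dependent offsets change.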
arXiv Detail & Related papers (2024-05-07T05:58:44Z)
- Estimating the Hessian Matrix of Ranking Objectives for Stochastic Learning to Rank with Gradient Boosted Trees [63.18324983384337]
We introduce the first stochastic learning-to-rank method for Gradient Boosted Decision Trees (GBDTs).
Our main contribution is a novel estimator for the second-order derivatives, i.e., the Hessian matrix.
We incorporate our estimator into the existing PL-Rank framework, which was originally designed for first-order derivatives only.
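
The reason a Hessian estimator matters for GBDTs: standard gradient-boosting frameworks fit each tree leaf with a Newton step that needs both first- and second-order derivatives of the objective. A generic illustration of that leaf formula (standard in Newton boosting, not the paper's estimator):

```python
def newton_leaf_value(grads, hessians, lam=1.0):
    """Second-order (Newton) leaf weight used by gradient-boosted trees:
    w* = -sum(g_i) / (sum(h_i) + lambda), with L2 regularizer lambda.
    A stochastic ranking objective must supply the h_i terms (the Hessian
    diagonal) before it can drive this update."""
    return -sum(grads) / (sum(hessians) + lam)
```
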
arXiv Detail & Related papers (2024-04-18T13:53:32Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Inference-time Stochastic Ranking with Risk Control [19.20938164194589]
Learning to Rank methods are vital in online economies, affecting users and item providers.
We propose a novel method that performs ranking at inference time with guaranteed utility or fairness given pretrained scoring functions.
arXiv Detail & Related papers (2023-06-12T15:44:58Z)
- Zero-Shot Listwise Document Reranking with a Large Language Model [58.64141622176841]
We propose Listwise Reranker with a Large Language Model (LRL), which achieves strong reranking effectiveness without using any task-specific training data.
Experiments on three TREC web search datasets demonstrate that LRL not only outperforms zero-shot pointwise methods when reranking first-stage retrieval results, but can also act as a final-stage reranker.
arXiv Detail & Related papers (2023-05-03T14:45:34Z)
- Individually Fair Ranking [23.95661284311917]
We develop an algorithm to train individually fair learning-to-rank models.
The proposed approach ensures items from minority groups appear alongside similar items from majority groups.
arXiv Detail & Related papers (2021-03-19T21:17:11Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.