Calibrating Explore-Exploit Trade-off for Fair Online Learning to Rank
- URL: http://arxiv.org/abs/2111.00735v1
- Date: Mon, 1 Nov 2021 07:22:05 GMT
- Title: Calibrating Explore-Exploit Trade-off for Fair Online Learning to Rank
- Authors: Yiling Jia, Hongning Wang
- Abstract summary: Online learning to rank (OL2R) has attracted great research interest in recent years.
We propose a general framework to achieve fairness defined by group exposure in OL2R.
In particular, when the model is exploring a set of results for relevance feedback, we confine the exploration within a subset of random permutations.
- Score: 38.28889079095716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online learning to rank (OL2R) has attracted great research interest in recent years, thanks to its advantage of avoiding the expensive relevance labeling required by offline supervised ranking model learning. Such a solution explores the unknowns (e.g., intentionally presenting selected results at top positions) to improve its relevance estimation. This, however, raises concerns about ranking fairness: different groups of items might receive differential treatment during the course of OL2R. Existing fair ranking solutions, though, usually require knowledge of result relevance or a well-performing ranker beforehand, which contradicts the setting of OL2R; they thus cannot be directly applied to guarantee fairness.
In this work, we propose a general framework to achieve fairness, defined by group exposure, in OL2R. The key idea is to calibrate exploration and exploitation for fairness control, relevance learning, and online ranking quality. In particular, when the model is exploring a set of results for relevance feedback, we confine the exploration to a subset of random permutations in which fairness across groups is maintained while the feedback remains unbiased. Theoretically, we prove that such a strategy introduces minimal distortion to OL2R's regret in obtaining fairness. Extensive empirical analysis on two public learning-to-rank benchmark datasets demonstrates the effectiveness of the proposed solution compared to existing fair OL2R solutions.
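To make the key idea concrete, here is a minimal sketch of confining exploration to fairness-preserving permutations. This is not the authors' implementation: the DCG-style examination weights 1/log2(position + 2), the two groups "A"/"B", the disparity threshold `epsilon`, and the function names `exposure` and `fair_explore` are all illustrative assumptions, and the brute-force enumeration of permutations is for clarity only.

```python
import itertools
import math
import random

def exposure(ranking, group_of, group):
    """Group exposure under an assumed DCG-style examination model:
    the item at (0-indexed) position p receives weight 1 / log2(p + 2)."""
    return sum(
        1.0 / math.log2(pos + 2)
        for pos, item in enumerate(ranking)
        if group_of[item] == group
    )

def fair_explore(items, group_of, epsilon):
    """Sample a ranking uniformly at random from the subset of permutations
    whose exposure disparity between groups "A" and "B" is at most epsilon.
    Sampling uniformly within this confined subset is what keeps the
    resulting click feedback unbiased for relevance learning."""
    fair_perms = [
        perm for perm in itertools.permutations(items)
        if abs(exposure(perm, group_of, "A")
               - exposure(perm, group_of, "B")) <= epsilon
    ]
    if not fair_perms:  # epsilon too tight for any permutation: relax to all
        fair_perms = list(itertools.permutations(items))
    return random.choice(fair_perms)

# Toy usage: four candidate results, two per group.
group_of = {"d1": "A", "d2": "A", "d3": "B", "d4": "B"}
print(fair_explore(list(group_of), group_of, epsilon=0.35))
```

Note that enumerating all n! permutations is exponential; the sketch only illustrates the feasibility-set idea of restricting randomized exploration to fairness-maintaining rankings, not an efficient construction of that subset.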
Related papers
- FairLoRA: Unpacking Bias Mitigation in Vision Models with Fairness-Driven Low-Rank Adaptation [3.959853359438669]
We introduce FairLoRA, a novel fairness-specific regularizer for Low-Rank Adaptation (LoRA).
Our results demonstrate that the need for higher ranks to mitigate bias is not universal; it depends on factors such as the pre-trained model, dataset, and task.
arXiv Detail & Related papers (2024-10-22T18:50:36Z)
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
arXiv Detail & Related papers (2023-11-03T21:19:59Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Learning Neural Ranking Models Online from Implicit User Feedback [40.40829575021796]
We propose to learn a neural ranking model from users' implicit feedback (e.g., clicks) collected on the fly.
We focus on RankNet and LambdaRank, due to their great empirical success and wide adoption in offline settings.
arXiv Detail & Related papers (2022-01-17T23:11:39Z)
- Incentives for Item Duplication under Fair Ranking Policies [69.14168955766847]
We study the behaviour of different fair ranking policies in the presence of duplicates.
We find that fairness-aware ranking policies may conflict with diversity, due to their potential to incentivize duplication more than policies solely focused on relevance.
arXiv Detail & Related papers (2021-10-29T11:11:15Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- PairRank: Online Pairwise Learning to Rank by Divide-and-Conquer [35.199462901346706]
We propose to estimate a pairwise learning to rank model online.
In each round, candidate documents are partitioned and ranked according to the model's confidence on the estimated pairwise rank order (see the sketch after this list).
A regret bound defined directly on the number of mis-ordered pairs is proven, connecting the online solution's theoretical convergence with its expected ranking performance.
arXiv Detail & Related papers (2021-02-28T01:16:55Z)
- Fairness Through Regularization for Learning to Rank [33.52974791836553]
We show how to transfer numerous fairness notions from binary classification to a learning to rank context.
Our formalism allows us to design a method for incorporating fairness objectives with provable generalization guarantees.
arXiv Detail & Related papers (2021-02-11T13:29:08Z)
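The PairRank entry above describes its divide-and-conquer step only briefly; below is a minimal illustrative sketch, not the authors' code, of one way such a confidence-based partition could work. The interfaces `prob(i, j)` and `is_certain(i, j)` and all other names are hypothetical: documents connected by uncertain pairwise orders are merged into one block, blocks are ordered by the confident comparisons, and exploration happens only inside blocks.

```python
import random

def pairrank_partition(docs, prob, is_certain):
    """Partition docs by pairwise confidence (illustrative only).

    prob(i, j)       -> assumed estimate of P(doc i ranks above doc j)
    is_certain(i, j) -> assumed flag: model is confident about this pair
    """
    # Union-find merge of documents connected by uncertain pairs.
    parent = {d: d for d in docs}
    def find(d):
        while parent[d] != d:
            parent[d] = parent[parent[d]]  # path halving
            d = parent[d]
        return d
    for i in docs:
        for j in docs:
            if i < j and not is_certain(i, j):
                parent[find(i)] = find(j)
    blocks = {}
    for d in docs:
        blocks.setdefault(find(d), []).append(d)

    # Exploit across blocks: order them by mean estimated win probability
    # against outside documents. Explore within blocks: shuffle uncertain orders.
    def block_score(block):
        others = [d for d in docs if d not in block]
        if not others:
            return 0.0
        return sum(prob(b, o) for b in block for o in others) / (len(block) * len(others))

    ranking = []
    for block in sorted(blocks.values(), key=block_score, reverse=True):
        random.shuffle(block)
        ranking.extend(block)
    return ranking

# Toy usage: mock scores whose gaps decide which pairs count as "certain".
scores = {"d1": 0.9, "d2": 0.7, "d3": 0.65, "d4": 0.2}
prob = lambda i, j: 1.0 if scores[i] > scores[j] else 0.0
is_certain = lambda i, j: abs(scores[i] - scores[j]) > 0.1  # mock confidence width
print(pairrank_partition(list(scores), prob, is_certain))   # d2 and d3 share a block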