End-to-end Learning for Fair Ranking Systems
- URL: http://arxiv.org/abs/2111.10723v1
- Date: Sun, 21 Nov 2021 03:25:04 GMT
- Title: End-to-end Learning for Fair Ranking Systems
- Authors: James Kotary, Ferdinando Fioretto, Pascal Van Hentenryck, Ziwei Zhu
- Abstract summary: This paper introduces Smart Predict and Optimize for Fair Ranking (SPOFR), an integrated optimization and learning framework for fairness-constrained learning to rank.
It is shown to significantly improve current state-of-the-art fair learning-to-rank systems with respect to established performance metrics.
- Score: 44.82771494830451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The learning-to-rank problem aims at ranking items to maximize exposure of
those most relevant to a user query. A desirable property of such ranking
systems is to guarantee some notion of fairness among specified item groups.
While fairness has recently been considered in the context of learning-to-rank
systems, current methods cannot provide guarantees on the fairness of the
proposed ranking policies.
This paper addresses this gap and introduces Smart Predict and Optimize for
Fair Ranking (SPOFR), an integrated optimization and learning framework for
fairness-constrained learning to rank. The end-to-end SPOFR framework includes
a constrained optimization sub-model and produces ranking policies that are
guaranteed to satisfy fairness constraints while allowing for fine control of
the fairness-utility tradeoff. SPOFR is shown to significantly improve current
state-of-the-art fair learning-to-rank systems with respect to established
performance metrics.
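The core idea of the abstract, selecting a ranking policy that maximizes utility subject to a hard fairness constraint, can be illustrated with a minimal sketch. This is not the authors' actual SPOFR sub-model (which embeds a linear program in the training loop); it is a toy brute-force version for a handful of items, and the position-bias weights, the exposure-gap constraint, and the tolerance `delta` are all illustrative assumptions.

```python
import math
from itertools import permutations

def fair_ranking(relevance, groups, delta=0.2):
    """Brute-force a fairness-constrained ranking for a small item set.

    relevance: predicted relevance per item; groups: 0/1 group label per item.
    A ranking is feasible when the average position-weighted exposure of
    the two groups differs by at most `delta` (illustrative constraint).
    """
    n = len(relevance)
    # Position bias: items higher in the ranking receive more exposure.
    weight = [1.0 / math.log2(pos + 2) for pos in range(n)]

    def exposure_gap(ranking):
        exp = {0: [], 1: []}
        for pos, item in enumerate(ranking):
            exp[groups[item]].append(weight[pos])
        means = [sum(v) / len(v) for v in exp.values() if v]
        return abs(means[0] - means[-1])

    def utility(ranking):
        # DCG-style utility: relevance discounted by position.
        return sum(relevance[item] * weight[pos]
                   for pos, item in enumerate(ranking))

    # Keep only rankings satisfying the fairness constraint, then maximize utility.
    feasible = [r for r in permutations(range(n)) if exposure_gap(r) <= delta]
    return max(feasible, key=utility) if feasible else None

# Four items, two per group; the utility-optimal ranking (0, 1, 2, 3)
# over-exposes group 0, so the constraint forces a different ordering.
best = fair_ranking([0.9, 0.7, 0.4, 0.2], [0, 0, 1, 1], delta=0.2)
```

The guarantee is by construction: only rankings satisfying the constraint are candidates, which mirrors (in miniature) how a constrained optimization sub-model enforces fairness regardless of how accurate the learned relevance scores are.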
Related papers
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
The rise of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Fairness for Robust Learning to Rank [8.019491256870557]
We derive a new ranking system based on the first principles of distributional robustness.
We show that our approach provides better utility for highly fair rankings than existing baseline methods.
arXiv Detail & Related papers (2021-12-12T17:56:56Z)
- A Pre-processing Method for Fairness in Ranking [0.0]
We propose a fair ranking framework that evaluates the order of training data in a pairwise manner.
We show that our method outperforms the existing methods in the trade-off between accuracy and fairness over real-world datasets.
arXiv Detail & Related papers (2021-10-29T02:55:32Z)
- Balancing Accuracy and Fairness for Interactive Recommendation with Reinforcement Learning [68.25805655688876]
Fairness in recommendation has attracted increasing attention due to bias and discrimination possibly caused by traditional recommenders.
We propose a reinforcement learning based framework, FairRec, to dynamically maintain a long-term balance between accuracy and fairness in interactive recommender systems (IRS).
Extensive experiments validate that FairRec can improve fairness, while preserving good recommendation quality.
arXiv Detail & Related papers (2021-06-25T02:02:51Z)
- Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation for BERT Rankers [9.811131801693856]
We provide a novel framework to measure the fairness in the retrieved text contents of ranking models.
We propose an adversarial bias mitigation approach applied to state-of-the-art BERT rankers.
Our results on the MS MARCO benchmark show that, while the fairness of all ranking models is lower than the ones of ranker-agnostic baselines, the fairness in retrieved contents significantly improves when applying the proposed adversarial training.
arXiv Detail & Related papers (2021-04-28T08:53:54Z)
- Fairness Through Regularization for Learning to Rank [33.52974791836553]
We show how to transfer numerous fairness notions from binary classification to a learning to rank context.
Our formalism allows us to design a method for incorporating fairness objectives with provable generalization guarantees.
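The regularization approach summarized above can be sketched as a standard ranking loss plus a weighted fairness penalty. This is a generic illustration, not the paper's actual formalism: the pairwise logistic loss, the mean-score-gap penalty, and the coefficient `lam` are all assumptions chosen for simplicity.

```python
import math

def pairwise_ranking_loss(scores, labels):
    """Logistic pairwise loss: penalize pairs scored against their labels."""
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:
                loss += math.log(1.0 + math.exp(scores[j] - scores[i]))
                pairs += 1
    return loss / max(pairs, 1)

def fairness_penalty(scores, groups):
    """Illustrative penalty: squared gap between the two groups' mean scores."""
    g0 = [s for s, g in zip(scores, groups) if g == 0]
    g1 = [s for s, g in zip(scores, groups) if g == 1]
    return (sum(g0) / len(g0) - sum(g1) / len(g1)) ** 2

def regularized_loss(scores, labels, groups, lam=0.5):
    # Trade off ranking accuracy against group fairness via lam.
    return pairwise_ranking_loss(scores, labels) + lam * fairness_penalty(scores, groups)

# Group 0 is scored much higher than group 1, so the penalty term is active.
total = regularized_loss([2.0, 1.0, 0.5, 0.2], [1, 1, 0, 0], [0, 0, 1, 1], lam=0.5)
```

Unlike the hard-constraint approach of SPOFR, a regularizer only encourages fairness: raising `lam` shifts the optimum toward fairer scores but provides no guarantee that any given fairness threshold is met.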
arXiv Detail & Related papers (2021-02-11T13:29:08Z)
- Overview of the TREC 2019 Fair Ranking Track [65.15263872493799]
The goal of the TREC Fair Ranking track was to develop a benchmark for evaluating retrieval systems in terms of fairness to different content providers.
This paper presents an overview of the track, including the task definition, descriptions of the data and the annotation process.
arXiv Detail & Related papers (2020-03-25T21:34:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.