Fairness Through Regularization for Learning to Rank
- URL: http://arxiv.org/abs/2102.05996v1
- Date: Thu, 11 Feb 2021 13:29:08 GMT
- Title: Fairness Through Regularization for Learning to Rank
- Authors: Nikola Konstantinov, Christoph H. Lampert
- Abstract summary: We show how to transfer numerous fairness notions from binary classification to a learning to rank context.
Our formalism allows us to design a method for incorporating fairness objectives with provable generalization guarantees.
- Score: 33.52974791836553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the abundance of ranking applications in recent years, addressing
fairness concerns around automated ranking systems has become necessary for
increasing trust among end-users. Previous work on fair ranking has mostly
focused on application-specific fairness notions, often tailored to online
advertising, and it rarely considers learning as part of the process. In this
work, we show how to transfer numerous fairness notions from binary
classification to a learning to rank context. Our formalism allows us to design
a method for incorporating fairness objectives with provable generalization
guarantees. An extensive experimental evaluation shows that our method can
improve ranking fairness substantially with little or no loss of model quality.
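The abstract describes augmenting a learning-to-rank objective with a fairness regularizer. As a rough illustration only (not the paper's actual formalism or regularizers), a pairwise ranking loss plus a demographic-parity-style group penalty might be sketched as follows; the function names, the choice of penalty, and the weighting scheme are all illustrative assumptions:

```python
import math

def pairwise_ranking_loss(scores, labels):
    # Logistic pairwise loss: penalizes item pairs scored out of label order.
    loss, count = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:
                loss += math.log1p(math.exp(-(scores[i] - scores[j])))
                count += 1
    return loss / max(count, 1)

def group_score_gap(scores, protected):
    # Absolute difference in mean score between the two groups: a simple
    # demographic-parity-style penalty standing in for the paper's regularizers.
    a = [s for s, p in zip(scores, protected) if p]
    b = [s for s, p in zip(scores, protected) if not p]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def fair_ranking_objective(scores, labels, protected, lam=0.5):
    # Ranking loss plus a lam-weighted fairness penalty; lam trades off
    # ranking quality against fairness.
    return pairwise_ranking_loss(scores, labels) + lam * group_score_gap(scores, protected)
```

Minimizing such an objective over model parameters pushes the learned scores toward both correct orderings and smaller between-group score disparity, with `lam` controlling the trade-off.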
Related papers
- A Benchmark for Fairness-Aware Graph Learning [58.515305543487386]
We present an extensive benchmark on ten representative fairness-aware graph learning methods.
Our in-depth analysis reveals key insights into the strengths and limitations of existing methods.
arXiv Detail & Related papers (2024-07-16T18:43:43Z)
- Fairness in Ranking: Robustness through Randomization without the Protected Attribute [15.086941303164375]
We propose a randomized method for post-processing rankings, which does not require access to the protected attribute.
In an extensive numerical study, we show that our methods are robust with respect to P-Fairness and effective with respect to Normalized Discounted Cumulative Gain (NDCG) relative to the baseline ranking, improving on previously proposed methods.
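NDCG, the utility metric mentioned above, can be sketched in a few lines. This is the linear-gain variant; some implementations use exponential gains (2^rel - 1) instead, and the function names here are illustrative:

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: graded relevance discounted by
    # log2 of the (1-based) rank plus one.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfectly ordered ranking scores 1.0.
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```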
arXiv Detail & Related papers (2024-03-28T13:50:24Z)
- RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank [54.854714257687334]
We propose a novel approach, RankCSE, for unsupervised sentence representation learning.
It incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
An extensive set of experiments is conducted on both semantic textual similarity (STS) and transfer (TR) tasks.
arXiv Detail & Related papers (2023-05-26T08:27:07Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- Fairness for Robust Learning to Rank [8.019491256870557]
We derive a new ranking system based on the first principles of distributional robustness.
We show that our approach provides better utility for highly fair rankings than existing baseline methods.
arXiv Detail & Related papers (2021-12-12T17:56:56Z)
- End-to-end Learning for Fair Ranking Systems [44.82771494830451]
This paper introduces Smart Predict and Optimize for Fair Ranking (SPOFR).
SPOFR is an integrated optimization and learning framework for fairness-constrained learning to rank.
It is shown to significantly improve current state-of-the-art fair learning-to-rank systems with respect to established performance metrics.
arXiv Detail & Related papers (2021-11-21T03:25:04Z)
- A Pre-processing Method for Fairness in Ranking [0.0]
We propose a fair ranking framework that evaluates the order of training data in a pairwise manner.
We show that our method outperforms the existing methods in the trade-off between accuracy and fairness over real-world datasets.
arXiv Detail & Related papers (2021-10-29T02:55:32Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
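As a toy illustration of post-hoc score adjustment (not the framework proposed in that paper), one could shift each group's scores by a constant so that group means coincide before re-ranking. The function below is a hypothetical sketch:

```python
def equalize_group_means(scores, groups):
    # Shift each group's scores by a constant so that all group means
    # equal the overall mean; within-group orderings are preserved.
    overall = sum(scores) / len(scores)
    by_group = {}
    for s, g in zip(scores, groups):
        by_group.setdefault(g, []).append(s)
    means = {g: sum(v) / len(v) for g, v in by_group.items()}
    return [s + (overall - means[g]) for s, g in zip(scores, groups)]
```

Because the adjustment only needs the trained model's scores and group labels, it is model-agnostic in the same spirit as the post-processing approaches above; real methods additionally account for utility loss rather than equalizing means blindly.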
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.