Debiasing Neural Retrieval via In-batch Balancing Regularization
- URL: http://arxiv.org/abs/2205.09240v1
- Date: Wed, 18 May 2022 22:57:15 GMT
- Title: Debiasing Neural Retrieval via In-batch Balancing Regularization
- Authors: Yuantong Li, Xiaokai Wei, Zijian Wang, Shen Wang, Parminder Bhatia,
Xiaofei Ma, Andrew Arnold
- Abstract summary: We develop a differentiable normed Pairwise Ranking Fairness (nPRF) measure and leverage the T-statistic on top of nPRF to improve fairness.
Our method with nPRF achieves significantly less bias with minimal degradation in ranking performance compared with the baseline.
- Score: 25.941718123899356
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: People frequently interact with information retrieval (IR) systems; however,
IR models exhibit biases and discrimination towards various demographics.
In-processing fair ranking methods provide a trade-off between accuracy and
fairness by adding a fairness-related regularization term to the loss
function. However, there has been no intuitive objective function that depends
on click probability and user engagement to optimize directly towards fairness.
In this work, we propose the In-Batch Balancing Regularization (IBBR) to
mitigate the ranking disparity among subgroups. In particular, we develop a
differentiable \textit{normed Pairwise Ranking Fairness} (nPRF) and leverage
the T-statistic on top of nPRF over subgroups as a regularization term to improve
fairness. Empirical results with the BERT-based neural rankers on the MS MARCO
Passage Retrieval dataset with the human-annotated non-gendered queries
benchmark \citep{rekabsaz2020neural} show that our IBBR method with nPRF
achieves significantly less bias with minimal degradation in ranking
performance compared with the baseline.
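The abstract's recipe (a differentiable nPRF per subgroup, with a T-statistic across the batch as the regularizer) can be sketched as follows. This is a minimal, illustrative implementation under stated assumptions, not the paper's exact formulation: nPRF is approximated here as a sigmoid-relaxed fraction of cross-group document pairs won by the subgroup, the T-statistic is Welch's, and the function names, the `lam` weight, and the balanced-case fallback of 0.5 are all invented for this sketch.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def nprf(scores, groups, g):
    """Soft (sigmoid-relaxed) fraction of cross-group pairs in which a
    document from subgroup g outranks a document from the other subgroup.
    Assumed stand-in for the paper's normed Pairwise Ranking Fairness."""
    in_g = [s for s, grp in zip(scores, groups) if grp == g]
    out_g = [s for s, grp in zip(scores, groups) if grp != g]
    if not in_g or not out_g:
        return 0.5  # no cross-group pairs: treat the query as balanced
    total = sum(sigmoid(si - sj) for si in in_g for sj in out_g)
    return total / (len(in_g) * len(out_g))

def t_statistic(a, b):
    """Welch's t-statistic between two samples (eps avoids div-by-zero)."""
    eps = 1e-8
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / max(len(a) - 1, 1)
    vb = sum((x - mb) ** 2 for x in b) / max(len(b) - 1, 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b) + eps)

def ibbr_penalty(batch_scores, batch_groups, lam=0.1):
    """In-batch balancing penalty: squared T-statistic between the two
    subgroups' per-query nPRF values, added to the ranking loss."""
    nprf_a = [nprf(s, g, 0) for s, g in zip(batch_scores, batch_groups)]
    nprf_b = [nprf(s, g, 1) for s, g in zip(batch_scores, batch_groups)]
    return lam * t_statistic(nprf_a, nprf_b) ** 2
```

In training, this penalty would be added to the ranking loss so that gradients push the two subgroups' nPRF distributions together; when the ranker scores both subgroups symmetrically, the T-statistic, and hence the penalty, vanishes.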
Related papers
- The Gaps between Pre-train and Downstream Settings in Bias Evaluation
and Debiasing [74.7319697510621]
In-Context Learning (ICL) induces smaller changes to PLMs compared to FT-based debiasing methods.
ICL-based debiasing methods show a higher correlation between intrinsic and extrinsic bias scores compared to FT-based methods.
arXiv Detail & Related papers (2024-01-16T17:15:08Z) - Marginal Debiased Network for Fair Visual Recognition [59.05212866862219]
We propose a novel marginal debiased network (MDN) to learn debiased representations.
Our MDN can achieve a remarkable performance on under-represented samples.
arXiv Detail & Related papers (2024-01-04T08:57:09Z) - Unbiased Learning to Rank with Biased Continuous Feedback [5.561943356123711]
Unbiased learning-to-rank (LTR) algorithms have been shown to model relative relevance accurately from noisy feedback.
To provide personalized high-quality recommendation results, recommender systems need to model both categorical and continuous biased feedback.
We introduce the pairwise trust bias to separate the position bias, trust bias, and user relevance explicitly.
Experiment results on public benchmark datasets and internal live traffic of a large-scale recommender system at Tencent News show superior results for continuous labels.
arXiv Detail & Related papers (2023-03-08T02:14:08Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation
Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR)
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z) - Unbiased Pairwise Learning to Rank in Recommender Systems [4.058828240864671]
Unbiased learning to rank algorithms are appealing candidates and have already been applied in many applications with single categorical labels.
We propose a novel unbiased LTR algorithm to tackle the challenges, which innovatively models position bias in the pairwise fashion.
Experiment results on public benchmark datasets and internal live traffic show the superior results of the proposed method for both categorical and continuous labels.
arXiv Detail & Related papers (2021-11-25T06:04:59Z) - Increasing Fairness in Predictions Using Bias Parity Score Based Loss
Function Regularization [0.8594140167290099]
We introduce a family of fairness enhancing regularization components that we use in conjunction with the traditional binary-cross-entropy based accuracy loss.
We deploy them in the context of a recidivism prediction task as well as on a census-based adult income dataset.
arXiv Detail & Related papers (2021-11-05T17:42:33Z) - Understanding the Effects of Adversarial Personalized Ranking
Optimization Method on Recommendation Quality [6.197934754799158]
We model the learning characteristics of the Bayesian Personalized Ranking (BPR) and APR optimization frameworks.
We show that APR amplifies the popularity bias more than BPR due to an unbalanced number of received positive updates from short-head items.
arXiv Detail & Related papers (2021-07-29T10:22:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.