Towards Disentangling Relevance and Bias in Unbiased Learning to Rank
- URL: http://arxiv.org/abs/2212.13937v4
- Date: Sun, 4 Jun 2023 17:38:42 GMT
- Authors: Yunan Zhang, Le Yan, Zhen Qin, Honglei Zhuang, Jiaming Shen, Xuanhui
Wang, Michael Bendersky, Marc Najork
- Abstract summary: Unbiased learning to rank (ULTR) studies the problem of mitigating various biases from implicit user feedback data such as clicks.
We propose three methods to mitigate the negative confounding effects by better disentangling relevance and bias.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Unbiased learning to rank (ULTR) studies the problem of mitigating various
biases from implicit user feedback data such as clicks, and has been receiving
considerable attention recently. A popular ULTR approach for real-world
applications uses a two-tower architecture, where click modeling is factorized
into a relevance tower with regular input features, and a bias tower with
bias-relevant inputs such as the position of a document. A successful
factorization will allow the relevance tower to be exempt from biases. In this
work, we identify a critical issue that existing ULTR methods ignore: the bias
tower can be confounded with the relevance tower via the underlying true
relevance. In particular, the positions were determined by the logging policy,
i.e., the previous production model, which possesses relevance information. We
give both theoretical analysis and empirical results to show the negative
effects on the relevance tower due to this correlation. We then propose three
methods to mitigate the negative confounding effects by better disentangling
relevance and bias. Empirical results on both controlled public datasets and a
large-scale industry dataset show the effectiveness of the proposed approaches.
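The two-tower factorization described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration under the common assumption that the two towers combine additively in logit space; the function names and the linear toy model are illustrative, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relevance_tower(x, w):
    """Relevance score from regular query/document features (linear toy model)."""
    return x @ w

def bias_tower(position, pos_bias):
    """Bias score from bias-relevant inputs, here just the rank position."""
    return pos_bias[position]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def click_prob(x, position, w, pos_bias):
    # Two-tower factorization: the click logit is the sum of a
    # relevance logit and a position-bias logit.
    return sigmoid(relevance_tower(x, w) + bias_tower(position, pos_bias))

# Toy setup: 8 features, 10 ranked positions.
n_features, n_positions = 8, 10
w = rng.normal(size=n_features)
pos_bias = -0.5 * np.arange(n_positions)  # deeper positions get less examination
x = rng.normal(size=n_features)

# Same document at two positions: only the bias tower changes the output.
p_top = click_prob(x, 0, w, pos_bias)
p_deep = click_prob(x, 9, w, pos_bias)
```

The paper's confounding concern arises exactly here: if the logging policy placed relevant documents at top positions, `position` is correlated with the true relevance, so the bias tower can absorb relevance signal instead of pure position bias.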
Related papers
- Causal Walk: Debiasing Multi-Hop Fact Verification with Front-Door Adjustment [27.455646975256986]
Causal Walk is a novel method for debiasing multi-hop fact verification from a causal perspective.
Results show that Causal Walk outperforms some previous debiasing methods on both existing datasets and newly constructed datasets.
arXiv Detail & Related papers (2024-03-05T06:28:02Z)
- Targeted Data Augmentation for bias mitigation [0.0]
We introduce a novel and efficient approach for addressing biases called Targeted Data Augmentation (TDA).
Unlike the laborious task of removing biases, our method proposes to insert biases instead, resulting in improved performance.
To identify biases, we annotated two diverse datasets: a dataset of clinical skin lesions and a dataset of male and female faces.
arXiv Detail & Related papers (2023-08-22T12:25:49Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms are proposed to learn an unbiased ranking model from biased click data.
We propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
arXiv Detail & Related papers (2022-10-19T16:53:08Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Bilateral Self-unbiased Learning from Biased Implicit Feedback [10.690479112143658]
We propose a novel unbiased recommender learning model, namely BIlateral SElf-unbiased Recommender (BISER).
BISER consists of two key components: (i) self-inverse propensity weighting (SIPW) to gradually mitigate the bias of items without incurring high computational costs; and (ii) bilateral unbiased learning (BU) to bridge the gap between two complementary models in model predictions.
Extensive experiments show that BISER consistently outperforms state-of-the-art unbiased recommender models over several datasets.
arXiv Detail & Related papers (2022-07-26T05:17:42Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Unbiased Pairwise Learning to Rank in Recommender Systems [4.058828240864671]
Unbiased learning to rank algorithms are appealing candidates and have already been applied in many applications with single categorical labels.
We propose a novel unbiased LTR algorithm to tackle these challenges, which innovatively models position bias in a pairwise fashion.
Experiment results on public benchmark datasets and internal live traffic show the superior results of the proposed method for both categorical and continuous labels.
arXiv Detail & Related papers (2021-11-25T06:04:59Z)
- Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2021-08-11T21:17:02Z)
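Several entries above (BISER's self-inverse propensity weighting, CPR's analysis of exposure propensity, and unbiased LTR generally) build on inverse propensity weighting. As a generic sketch of standard IPW, and not any one paper's exact estimator, a position-debiased pointwise click loss could look like the following:

```python
import numpy as np

def ipw_click_loss(clicks, scores, propensities):
    """Standard inverse-propensity-weighted pointwise click loss.

    clicks:       binary click labels (1 = clicked)
    scores:       model relevance scores (logits)
    propensities: estimated P(position examined) per impression
    """
    # Reweight each click by 1 / examination propensity so that, in
    # expectation over examination, the loss matches the loss that
    # would be computed from true relevance labels.
    weights = clicks / propensities
    # log P(click | score) under a sigmoid click model
    log_p = -np.log1p(np.exp(-scores))
    return -np.mean(weights * log_p)

clicks = np.array([1.0, 0.0, 1.0])
scores = np.array([2.0, -1.0, 0.5])
full_exposure = ipw_click_loss(clicks, scores, np.ones(3))
half_exposure = ipw_click_loss(clicks, scores, np.full(3, 0.5))
```

Halving all propensities doubles the loss contribution of each click, which is the core IPW idea: clicks at poorly examined positions count for more, compensating for the clicks that examination bias suppressed.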
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.