Handling Position Bias for Unbiased Learning to Rank in Hotels Search
- URL: http://arxiv.org/abs/2002.12528v1
- Date: Fri, 28 Feb 2020 03:48:42 GMT
- Title: Handling Position Bias for Unbiased Learning to Rank in Hotels Search
- Authors: Yinxiao Li
- Abstract summary: We will investigate the importance of properly handling the position bias in an online test environment in Tripadvisor Hotels search.
We propose an empirically effective method of handling the position bias that fully leverages the user action data.
The online A/B test results show that this method leads to an improved search ranking model.
- Score: 0.951828574518325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, search ranking and recommendation systems rely on a large
amount of data to train machine learning models such as Learning-to-Rank (LTR)
models to rank results for a given query, and implicit user feedback (e.g.
click data) has become the dominant source of data collection due to its
abundance and low cost, especially for major Internet companies. However, a
drawback of this data collection approach is that the data can be highly
biased, and one of the most significant biases is position bias, where users
are biased towards clicking on higher-ranked results. In this work, we
investigate the marginal importance of properly handling the position bias in
an online test environment in Tripadvisor Hotels search. We propose an
empirically effective method of handling the position bias that fully
leverages the user action data. We take advantage of the fact that when a user
clicks a result, they have almost certainly observed all the results above it,
and the propensities of the results below the clicked result are estimated by
a simple but effective position bias model. The online A/B test results show
that this method leads to an improved search ranking model.
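The observation assumption in the abstract can be sketched in a few lines: results ranked at or above the lowest click are treated as observed (propensity 1), while results below it fall back to a simple position-bias model. This is a minimal illustrative sketch, not the paper's implementation; the `1/k^eta` fallback model, the `eta` parameter, and the function names are assumptions.

```python
import numpy as np

def observation_propensities(click_positions, num_results, eta=1.0):
    # Per-position examination propensities for one search session.
    # Assumption from the abstract: every result ranked at or above the
    # lowest click has almost certainly been observed (propensity 1).
    # Results below the lowest click use a simple position-bias model,
    # P(observed | rank k) = (1 / k) ** eta, where `eta` is a hypothetical
    # severity parameter (not a value from the paper).
    propensities = np.array(
        [(1.0 / (k + 1)) ** eta for k in range(num_results)]
    )
    if click_positions:
        lowest_click = max(click_positions)  # 0-indexed rank of the last click
        propensities[: lowest_click + 1] = 1.0  # at or above a click: observed
    return propensities

def ipw_weights(click_positions, num_results, eta=1.0):
    # Inverse-propensity weights for debiasing click labels in an LTR
    # objective -- the standard counterfactual-LTR construction; the
    # paper's exact training setup is not specified here.
    return 1.0 / observation_propensities(click_positions, num_results, eta)
```

For example, with a single click at rank 3 (0-indexed position 2) among five results, the top three positions get propensity 1, while positions 4 and 5 get 1/4 and 1/5 under the illustrative fallback model.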
Related papers
- Contextual Dual Learning Algorithm with Listwise Distillation for Unbiased Learning to Rank [26.69630281310365]
Unbiased Learning to Rank (ULTR) aims to leverage biased implicit user feedback (e.g., click) to optimize an unbiased ranking model.
We propose a Contextual Dual Learning Algorithm with Listwise Distillation (CDLA-LD) to address both position bias and contextual bias.
arXiv Detail & Related papers (2024-08-19T09:13:52Z)
- Unbiased Learning to Rank Meets Reality: Lessons from Baidu's Large-Scale Search Dataset [48.708591046906896]
Unbiased learning-to-rank (ULTR) is a well-established framework for learning from user clicks.
We revisit and extend the available experiments on the Baidu-ULTR dataset.
We find that standard unbiased learning-to-rank techniques robustly improve click predictions but struggle to consistently improve ranking performance.
arXiv Detail & Related papers (2024-04-03T08:00:46Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- Feature-Level Debiased Natural Language Understanding [86.8751772146264]
Existing natural language understanding (NLU) models often rely on dataset biases to achieve high performance on specific datasets.
We propose debiasing contrastive learning (DCT) to mitigate biased latent features while accounting for the dynamic nature of bias.
DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance.
arXiv Detail & Related papers (2022-12-11T06:16:14Z)
- Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms are proposed to learn an unbiased ranking model from biased click data.
We propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
arXiv Detail & Related papers (2022-10-19T16:53:08Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove theoretically that this approach offsets the influence of user/item propensity on learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- AutoDebias: Learning to Debias for Recommendation [43.84313723394282]
We propose AutoDebias, which leverages another (small) set of uniform data to optimize the debiasing parameters.
We derive the generalization bound for AutoDebias and prove its ability to acquire the appropriate debiasing strategy.
arXiv Detail & Related papers (2021-05-10T08:03:48Z)
- Mitigating the Position Bias of Transformer Models in Passage Re-Ranking [12.526786110360622]
Supervised machine learning models and their evaluation strongly depend on the quality of the underlying dataset.
We observe a bias in the position of the correct answer in the text in two popular Question Answering datasets used for passage re-ranking.
We demonstrate that by mitigating the position bias, Transformer-based re-ranking models are equally effective on a biased and debiased dataset.
arXiv Detail & Related papers (2021-01-18T10:38:03Z)
- Towards Robustifying NLI Models Against Lexical Dataset Biases [94.79704960296108]
This paper explores both data-level and model-level debiasing methods to robustify models against lexical dataset biases.
First, we debias the dataset through data augmentation and enhancement, but show that the model bias cannot be fully removed via this method.
The second approach employs a bag-of-words sub-model to capture the features that are likely to exploit the bias and prevents the original model from learning these biased features.
arXiv Detail & Related papers (2020-05-10T17:56:10Z)
- Eliminating Search Intent Bias in Learning to Rank [0.32228025627337864]
We study how differences in user search intent can influence click activities and determine that there exists a bias between user search intent and document relevance.
We propose a search intent bias hypothesis that can be applied to most existing click models to improve their ability to learn unbiased relevance.
arXiv Detail & Related papers (2020-02-08T17:07:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.