Bilateral Self-unbiased Learning from Biased Implicit Feedback
- URL: http://arxiv.org/abs/2207.12660v1
- Date: Tue, 26 Jul 2022 05:17:42 GMT
- Title: Bilateral Self-unbiased Learning from Biased Implicit Feedback
- Authors: Jae-woong Lee, Seongmin Park, Joonseok Lee, and Jongwuk Lee
- Abstract summary: We propose a novel unbiased recommender learning model, namely BIlateral SElf-unbiased Recommender (BISER)
BISER consists of two key components: (i) self-inverse propensity weighting (SIPW) to gradually mitigate the bias of items without incurring high computational costs; and (ii) bilateral unbiased learning (BU) to bridge the gap between two complementary models in model predictions.
Extensive experiments show that BISER consistently outperforms state-of-the-art unbiased recommender models over several datasets.
- Score: 10.690479112143658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit feedback has been widely used to build commercial recommender
systems. Because observed feedback represents users' click logs, there is a
semantic gap between true relevance and observed feedback. More importantly,
observed feedback is usually biased towards popular items, thereby
overestimating the actual relevance of popular items. Although existing studies
have developed unbiased learning methods using inverse propensity weighting
(IPW) or causal reasoning, they solely focus on eliminating the popularity bias
of items. In this paper, we propose a novel unbiased recommender learning
model, namely BIlateral SElf-unbiased Recommender (BISER), to eliminate the
exposure bias of items caused by recommender models. Specifically, BISER
consists of two key components: (i) self-inverse propensity weighting (SIPW) to
gradually mitigate the bias of items without incurring high computational
costs; and (ii) bilateral unbiased learning (BU) to bridge the gap between two
complementary models in model predictions, i.e., user- and item-based
autoencoders, alleviating the high variance of SIPW. Extensive experiments show
that BISER consistently outperforms state-of-the-art unbiased recommender
models over several datasets, including Coat, Yahoo! R3, MovieLens, and
CiteULike.
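The two components can be illustrated with a minimal sketch. The paper's actual models are user- and item-based autoencoders; here, plain NumPy arrays stand in for model predictions, and the function names, the clipping threshold `eps`, and the squared-error agreement term are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sipw_loss(preds, clicks, eps=0.1):
    """Self-inverse propensity weighting (SIPW), sketched.

    Propensities are estimated from the model's *own* predictions
    (hence "self-"), avoiding a separate propensity model, and are
    clipped at eps to bound the variance of the inverse weights.
    """
    propensity = np.clip(preds, eps, 1.0)        # self-estimated exposure probability
    weights = clicks / propensity                # inverse-propensity weight on clicked pairs
    # pointwise log loss: rarely exposed (unpopular) clicked items get larger weight
    loss = -(weights * np.log(np.clip(preds, 1e-8, 1.0))
             + (1.0 - clicks) * np.log(np.clip(1.0 - preds, 1e-8, 1.0)))
    return loss.mean()

def bilateral_agreement(user_preds, item_preds):
    """Bilateral unbiased learning (BU) term, sketched: pull the
    user-based and item-based models' predictions toward each other,
    damping the high variance introduced by SIPW."""
    return np.mean((user_preds - item_preds) ** 2)
```

In this sketch, each model would be trained on `sipw_loss` plus a weighted `bilateral_agreement` penalty against the other model's predictions; the relative weight of the two terms is a hyperparameter the sketch leaves open.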
Related papers
- Debiased Contrastive Representation Learning for Mitigating Dual Biases in Recommender Systems [20.559573838679853]
In recommender systems, popularity and conformity biases undermine recommender effectiveness.
We build a causal graph to address both biases and describe the abstract data generation mechanism.
Then, we use it as a guide to develop a novel Debiased Contrastive Learning framework for Mitigating Dual Biases.
arXiv Detail & Related papers (2024-08-19T02:12:40Z) - Going Beyond Popularity and Positivity Bias: Correcting for Multifactorial Bias in Recommender Systems [74.47680026838128]
Two typical forms of bias in user interaction data with recommender systems (RSs) are popularity bias and positivity bias.
We consider multifactorial selection bias affected by both item and rating value factors.
We propose smoothing and alternating gradient descent techniques to reduce variance and improve the robustness of its optimization.
arXiv Detail & Related papers (2024-04-29T12:18:21Z) - Debiased Model-based Interactive Recommendation [22.007617148466807]
We develop a model called identifiable Debiased Model-based Interactive Recommendation (iDMIR in short).
For the first drawback, we devise a debiased causal world model based on the causal mechanism of the time-varying recommendation generation process with identification guarantees.
For the second drawback, we devise a debiased contrastive policy, which coincides with debiased contrastive learning and avoids sampling bias.
arXiv Detail & Related papers (2024-02-24T14:10:04Z) - GPTBIAS: A Comprehensive Framework for Evaluating Bias in Large Language Models [83.30078426829627]
Large language models (LLMs) have gained popularity and are being widely adopted by a large user community.
The existing evaluation methods have many constraints, and their results exhibit a limited degree of interpretability.
We propose a bias evaluation framework named GPTBIAS that leverages the high performance of LLMs to assess bias in models.
arXiv Detail & Related papers (2023-12-11T12:02:14Z) - Unbiased Learning to Rank with Biased Continuous Feedback [5.561943356123711]
Unbiased learning-to-rank (LTR) algorithms are verified to model relative relevance accurately based on noisy feedback.
To provide personalized high-quality recommendation results, recommender systems need to model both categorical and continuous biased feedback.
We introduce the pairwise trust bias to separate the position bias, trust bias, and user relevance explicitly.
Experiment results on public benchmark datasets and internal live traffic of a large-scale recommender system at Tencent News show superior results for continuous labels.
arXiv Detail & Related papers (2023-03-08T02:14:08Z) - Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a pair, AUR makes recommendations according to the uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR)
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z) - Deep Causal Reasoning for Recommendations [47.83224399498504]
A new trend in recommender system research is to negate the influence of confounders from a causal perspective.
We model the recommendation as a multi-cause multi-outcome (MCMO) inference problem.
We show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space.
arXiv Detail & Related papers (2022-01-06T15:00:01Z) - General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z) - Debiased Explainable Pairwise Ranking from Implicit Feedback [0.3867363075280543]
We focus on the state of the art pairwise ranking model, Bayesian Personalized Ranking (BPR)
BPR is a black box model that does not explain its outputs, thus limiting the user's trust in the recommendations.
We propose a novel explainable loss function and a corresponding Matrix Factorization-based model that generates recommendations along with item-based explanations.
arXiv Detail & Related papers (2021-07-30T17:19:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.