Probabilistic and Variational Recommendation Denoising
- URL: http://arxiv.org/abs/2105.09605v1
- Date: Thu, 20 May 2021 08:59:44 GMT
- Title: Probabilistic and Variational Recommendation Denoising
- Authors: Yu Wang, Xin Xin, Zaiqiao Meng, Xiangnan He, Joemon Jose, Fuli Feng
- Abstract summary: Learning from implicit feedback is one of the most common cases in the application of recommender systems.
We propose probabilistic and variational recommendation denoising for implicit feedback.
We employ the proposed DPI and DVAE on four state-of-the-art recommendation models and conduct experiments on three datasets.
- Score: 56.879165033014026
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning from implicit feedback is one of the most common cases in the
application of recommender systems. Generally speaking, interacted examples are
considered positive, while negative examples are sampled from uninteracted
ones. However, noisy examples are prevalent in real-world implicit feedback. A
noisy positive example may be interacted with even though it actually reflects
negative user preference. Conversely, a noisy negative example may be
uninteracted simply because the user is unaware of the item, and could denote
potential positive user preference. Conventional training methods overlook
these noisy examples, leading to sub-optimal recommendations. In this work, we
propose probabilistic and
variational recommendation denoising for implicit feedback. Through an
empirical study, we find that different models make relatively similar
predictions on clean examples which denote the real user preference, while the
predictions on noisy examples vary much more across different models. Motivated
by this observation, we propose denoising with probabilistic inference (DPI)
which aims to minimize the KL-divergence between the real user preference
distributions parameterized by two recommendation models while maximizing the
likelihood of data observation. We then show that DPI recovers the evidence
lower bound of a variational auto-encoder when the real user preference is
treated as the latent variable. This leads to our second learning framework,
denoising with variational autoencoder (DVAE). We employ the proposed DPI and
DVAE on four state-of-the-art recommendation models and conduct experiments on
three datasets. Experimental results demonstrate that DPI and DVAE
significantly improve recommendation performance compared with normal training
and other denoising methods. Code will be open-sourced.
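The DPI objective described above combines two terms: a likelihood term that makes each model explain the observed feedback, and a KL term that pulls the two models' predicted preference distributions toward agreement. The following is a minimal sketch of such an objective, assuming Bernoulli preference distributions over per-item interaction probabilities; the trade-off weight `alpha` and the function names are illustrative assumptions, not from the paper.

```python
import math

def bernoulli_kl(p, q, eps=1e-8):
    """KL(Bern(p) || Bern(q)) for scalar probabilities, clamped for stability."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def dpi_loss(probs_a, probs_b, labels, alpha=1.0, eps=1e-8):
    """Sketch of a DPI-style objective: maximize the likelihood of the
    observed feedback under two recommendation models while minimizing the
    KL divergence between their predicted preference distributions.

    probs_a, probs_b: interaction probabilities from two models
    labels: observed implicit feedback (1 = interacted, 0 = not)
    alpha: hypothetical trade-off weight between the two terms
    """
    nll, kl = 0.0, 0.0
    for pa, pb, y in zip(probs_a, probs_b, labels):
        pa_c = min(max(pa, eps), 1 - eps)
        pb_c = min(max(pb, eps), 1 - eps)
        # likelihood term: negative log-likelihood under each model
        nll += -(y * math.log(pa_c) + (1 - y) * math.log(1 - pa_c))
        nll += -(y * math.log(pb_c) + (1 - y) * math.log(1 - pb_c))
        # agreement term: models should agree on clean examples
        kl += bernoulli_kl(pa, pb)
    n = len(labels)
    return nll / n + alpha * kl / n
```

When the two models agree on an example, the KL term vanishes and only the likelihood term remains; disagreement (the paper's signature of a noisy example) inflates the loss, which is the behavior the agreement term is meant to exploit.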
Related papers
- Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization [60.176008034221404]
Direct Preference Optimization (DPO) and its variants are increasingly used for aligning language models with human preferences.
Prior work has observed that the likelihood of preferred responses often decreases during training.
We demonstrate that likelihood displacement can be catastrophic, shifting probability mass from preferred responses to responses with an opposite meaning.
arXiv Detail & Related papers (2024-10-11T14:22:44Z)
- Large Language Model Enhanced Hard Sample Identification for Denoising Recommendation [4.297249011611168]
Implicit feedback, which is often noisy, is widely used to build recommender systems.
Previous studies have attempted to alleviate the noise by identifying noisy samples based on their divergent patterns.
We propose a Large Language Model Enhanced Hard Sample Denoising framework.
arXiv Detail & Related papers (2024-09-16T14:57:09Z)
- Debiased Recommendation with Noisy Feedback [41.38490962524047]
We study intersectional threats to the unbiased learning of the prediction model from data missing not at random (MNAR) and outcome measurement errors (OME) in the collected data.
First, we design OME-EIB, OME-IPS, and OME-DR estimators, which largely extend the existing estimators to combat OME in real-world recommendation scenarios.
arXiv Detail & Related papers (2024-06-24T23:42:18Z)
- Double Correction Framework for Denoising Recommendation [45.98207284259792]
In implicit feedback, noisy samples can affect precise user preference learning.
A popular solution is based on dropping noisy samples in the model training phase.
We propose a Double Correction Framework for Denoising Recommendation.
arXiv Detail & Related papers (2024-05-18T12:15:10Z)
- Label Denoising through Cross-Model Agreement [43.5145547124009]
Memorizing noisy labels could affect the learning of the model, leading to sub-optimal performances.
We propose a novel framework to learn robust machine-learning models from noisy labels.
arXiv Detail & Related papers (2023-08-27T00:31:04Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a user-item pair, AUR makes recommendations according to the estimated uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- CausPref: Causal Preference Learning for Out-of-Distribution Recommendation [36.22965012642248]
The current recommender system is still vulnerable to the distribution shift of users and items in realistic scenarios.
We propose to incorporate the recommendation-specific DAG learner into a novel causal preference-based recommendation framework named CausPref.
Our approach surpasses the benchmark models significantly under types of out-of-distribution settings.
arXiv Detail & Related papers (2022-02-08T16:42:03Z)
- Investigating the Role of Negatives in Contrastive Representation Learning [59.30700308648194]
Noise contrastive learning is a popular technique for unsupervised representation learning.
We focus on disambiguating the role of one of these parameters: the number of negative examples.
We find that the results broadly agree with our theory, while our vision experiments are murkier with performance sometimes even being insensitive to the number of negatives.
arXiv Detail & Related papers (2021-06-18T06:44:16Z)
- Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
arXiv Detail & Related papers (2021-05-27T08:38:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.