Doubly Robust Collaborative Targeted Learning for Recommendation on Data Missing Not at Random
- URL: http://arxiv.org/abs/2203.10258v1
- Date: Sat, 19 Mar 2022 06:48:50 GMT
- Title: Doubly Robust Collaborative Targeted Learning for Recommendation on Data Missing Not at Random
- Authors: Peng Wu, Haoxuan Li, Yan Lyu, and Xiao-Hua Zhou
- Abstract summary: In recommender systems, the feedback data received is always missing not at random (MNAR).
We propose DR-TMLE, which effectively captures the merits of both error imputation-based (EIB) and doubly robust (DR) methods.
We also propose a novel RCT-free collaborative targeted learning algorithm for DR-TMLE, called DR-TMLE-TL.
- Score: 6.563595953273317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recommender systems, the feedback data received is always missing not at
random (MNAR), which poses challenges for accurate rating prediction. To
address this issue, many recent studies have been conducted on the doubly
robust (DR) method and its variants to reduce bias. However, theoretical
analysis shows that the DR method has a relatively large variance, while that
of the error imputation-based (EIB) method is smaller. In this paper, we
propose {\bf DR-TMLE} that effectively captures the merits of both EIB and DR,
by leveraging the targeted maximum likelihood estimation (TMLE) technique.
DR-TMLE first obtains an initial EIB estimator and then updates the error
imputation model along with the bias-reduced direction. Furthermore, we propose
a novel RCT-free collaborative targeted learning algorithm for DR-TMLE, called
{\bf DR-TMLE-TL}, which updates the propensity model adaptively to reduce the
bias of imputed errors. Both theoretical analysis and experiments demonstrate
the advantages of the proposed methods compared with existing debiasing
methods.
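
As background for the abstract above, DR-TMLE refines the standard doubly robust estimator of the average prediction error, which combines an error-imputation model with inverse-propensity weighting. A minimal sketch (variable names are illustrative, not from the paper):

```python
import numpy as np

def dr_error_estimate(observed, true_error, imputed_error, propensity):
    """Standard doubly robust (DR) estimator of the average prediction
    error over all user-item pairs.

    observed      : 0/1 indicator o_{u,i} that a rating was observed
    true_error    : prediction error e_{u,i} (only used where observed == 1)
    imputed_error : imputed error from the error imputation model
    propensity    : estimated observation probability for each pair
    """
    # Imputed error everywhere, plus a propensity-weighted correction
    # on the observed entries; unbiased if either model is correct.
    correction = observed * (true_error - imputed_error) / propensity
    return np.mean(imputed_error + correction)
```

The correction term vanishes wherever the imputation is exact, which is why the estimator tolerates a misspecified propensity model (and vice versa); DR-TMLE's targeted update pushes the imputation model along exactly this bias-reducing direction.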
Related papers
- Debiased Recommendation with Noisy Feedback [41.38490962524047]
We study the intersectional threats to unbiased learning of the prediction model posed by data MNAR and outcome measurement errors (OME) in the collected data.
First, we design OME-EIB, OME-IPS, and OME-DR estimators, which largely extend the existing estimators to combat OME in real-world recommendation scenarios.
arXiv Detail & Related papers (2024-06-24T23:42:18Z)
- Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method [115.29382166356478]
We introduce the adversarial retrieval attack (AREA) task.
It is meant to trick DR models into retrieving a target document that is outside the initial set of candidate documents retrieved by the DR model.
We find that the promising results that have previously been reported on attacking NRMs, do not generalize to DR models.
We propose to formalize attacks on DR models as a contrastive learning problem in a multi-view representation space.
arXiv Detail & Related papers (2023-08-19T00:24:59Z)
- How to Train Your DRAGON: Diverse Augmentation Towards Generalizable Dense Retrieval [80.54532535622988]
We show that a generalizable dense retriever can be trained to achieve high accuracy in both supervised and zero-shot retrieval.
DRAGON, our dense retriever trained with diverse augmentation, is the first BERT-base-sized DR to achieve state-of-the-art effectiveness in both supervised and zero-shot evaluations.
arXiv Detail & Related papers (2023-02-15T03:53:26Z)
- A Generalized Doubly Robust Learning Framework for Debiasing Post-Click Conversion Rate Prediction [23.340584290411208]
Post-click conversion rate (CVR) prediction is an essential task for discovering user interests and increasing platform revenues.
Currently, doubly robust (DR) learning approaches achieve the state-of-the-art performance for debiasing CVR prediction.
We propose two new DR methods, namely DR-BIAS and DR-MSE, which control the bias of DR loss and balance the bias and variance flexibly.
arXiv Detail & Related papers (2022-11-12T15:09:23Z)
- Multiple Robust Learning for Recommendation [13.06593469196849]
In recommender systems, a common problem is the presence of various biases in the collected data.
We propose a multiple robust (MR) estimator that can take the advantage of multiple candidate imputation and propensity models to achieve unbiasedness.
arXiv Detail & Related papers (2022-07-09T13:15:56Z)
- StableDR: Stabilized Doubly Robust Learning for Recommendation on Data Missing Not at Random [16.700598755439685]
We show that the doubly robust (DR) methods are unstable and have unbounded bias, variance, and generalization bounds to extremely small propensities.
We propose a stabilized doubly robust (StableDR) learning approach with a weaker reliance on extrapolation.
In addition, we propose a novel learning approach for StableDR that updates the imputation, propensity, and prediction models cyclically.
arXiv Detail & Related papers (2022-05-10T07:04:53Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to non linear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
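
The Gauss-Markov property referenced in this entry is easy to check empirically. A minimal numpy simulation (toy data with a known heteroscedastic noise scale, assumed here for illustration) shows that the weighted least squares estimator recovers the true coefficients on average:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one feature
beta = np.array([2.0, -1.0])                           # true coefficients
sigma = rng.uniform(0.5, 2.0, size=n)                  # per-sample noise scale
W = np.diag(1.0 / sigma**2)                            # optimal WLS weights

estimates = []
for _ in range(trials):
    y = X @ beta + sigma * rng.normal(size=n)
    # closed-form WLS: (X^T W X)^{-1} X^T W y
    b_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    estimates.append(b_hat)

mean_est = np.mean(estimates, axis=0)  # averages to approximately beta
```

Averaging `b_hat` over many trials lands close to `beta`, consistent with unbiasedness; the cited paper's contribution is extending such bias guarantees to nonlinear deep-learning settings via constraints.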
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to infer from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Enhanced Doubly Robust Learning for Debiasing Post-click Conversion Rate Estimation [29.27760413892272]
Post-click conversion, as a strong signal indicating the user preference, is salutary for building recommender systems.
Currently, most existing methods utilize counterfactual learning to debias recommender systems.
We propose a novel double learning approach for the MRDR estimator, which can convert the error imputation into the general CVR estimation.
arXiv Detail & Related papers (2021-05-28T06:59:49Z)
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to the unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.