Multiple Robust Learning for Recommendation
- URL: http://arxiv.org/abs/2207.10796v1
- Date: Sat, 9 Jul 2022 13:15:56 GMT
- Title: Multiple Robust Learning for Recommendation
- Authors: Haoxuan Li, Quanyu Dai, Yuru Li, Yan Lyu, Zhenhua Dong, Peng Wu,
Xiao-Hua Zhou
- Abstract summary: In recommender systems, a common problem is the presence of various biases in the collected data.
We propose a multiple robust (MR) estimator that can take advantage of multiple candidate imputation and propensity models to achieve unbiasedness.
- Score: 13.06593469196849
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recommender systems, a common problem is the presence of various biases in
the collected data, which deteriorates the generalization ability of the
recommendation models and leads to inaccurate predictions. Doubly robust (DR)
learning has been studied in many tasks in RS, with the advantage that unbiased
learning can be achieved when either a single imputation or a single propensity
model is accurate. In this paper, we propose a multiple robust (MR) estimator
that can take advantage of multiple candidate imputation and propensity
models to achieve unbiasedness. Specifically, the MR estimator is unbiased when
any of the imputation or propensity models, or a linear combination of these
models is accurate. Theoretical analysis shows that the proposed MR is an
enhanced version of DR when only a single imputation model and a single
propensity model are available, and has a smaller bias. Inspired by the generalization error bound of
MR, we further propose a novel multiple robust learning approach with
stabilization. We conduct extensive experiments on real-world and
semi-synthetic datasets, which demonstrate the superiority of the proposed
approach over state-of-the-art methods.
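As background, here is a minimal sketch of the standard doubly robust (DR) error estimator that the paper builds on, together with a purely illustrative way of plugging several candidate imputation and propensity models into the same formula. The function and variable names are made up for this sketch, and the paper's actual MR combination rule is not reproduced.

```python
import numpy as np

def dr_error_estimate(e_hat, e_obs, o, p_hat, clip=1e-6):
    """Standard doubly robust (DR) estimate of the average prediction error.

    e_hat : imputed errors for all user-item pairs (imputation model output)
    e_obs : observed errors (values where o == 0 are ignored)
    o     : 0/1 indicator that the pair's feedback was observed
    p_hat : estimated observation propensities (propensity model output)

    Unbiased if either e_hat or p_hat is accurate -- the classical DR property
    the paper starts from.
    """
    correction = np.where(o == 1, (e_obs - e_hat) / np.clip(p_hat, clip, 1.0), 0.0)
    return float(np.mean(e_hat + correction))

# Purely illustrative: a fixed convex combination of several candidate
# imputation and propensity models plugged into the same DR formula.
# The paper's MR estimator combines candidates differently (it is unbiased
# if any single candidate, or a linear combination of them, is accurate);
# that exact combination rule is not reproduced here.
def pooled_dr_estimate(e_hat_list, p_hat_list, e_obs, o, w_e, w_p):
    e_hat = sum(w * e for w, e in zip(w_e, e_hat_list))
    p_hat = sum(w * p for w, p in zip(w_p, p_hat_list))
    return dr_error_estimate(e_hat, e_obs, o, p_hat)
```

The propensity clipping above is a common variance-control heuristic; it is not the stabilization technique proposed in the paper.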
Related papers
- Debiased Recommendation with Noisy Feedback [41.38490962524047]
We study the intersectional threats to unbiased learning of the prediction model posed by data missing not at random (MNAR) and outcome measurement errors (OME) in the collected data.
First, we design OME-EIB, OME-IPS, and OME-DR estimators, which largely extend the existing estimators to combat OME in real-world recommendation scenarios.
arXiv Detail & Related papers (2024-06-24T23:42:18Z)
- Addressing Bias Through Ensemble Learning and Regularized Fine-Tuning [0.2812395851874055]
This paper proposes a comprehensive approach using multiple methods to remove bias in AI models.
We train multiple models with the counter-bias of the pre-trained model through data splitting, local training, and regularized fine-tuning.
We conclude our solution with knowledge distillation that results in a single unbiased neural network.
arXiv Detail & Related papers (2024-02-01T09:24:36Z)
- Curriculum-scheduled Knowledge Distillation from Multiple Pre-trained Teachers for Multi-domain Sequential Recommendation [102.91236882045021]
It is essential to explore how to use different pre-trained recommendation models efficiently in real-world systems.
We propose a novel curriculum-scheduled knowledge distillation from multiple pre-trained teachers for multi-domain sequential recommendation.
CKD-MDSR takes full advantage of different pre-trained recommendation models (PRMs) as multiple teacher models to boost a small student recommendation model.
arXiv Detail & Related papers (2024-01-01T15:57:15Z)
- Debiasing Multimodal Models via Causal Information Minimization [65.23982806840182]
We study bias arising from confounders in a causal graph for multimodal data.
Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data.
We use these features as confounder representations and use them via methods motivated by causal theory to remove bias from models.
arXiv Detail & Related papers (2023-11-28T16:46:14Z)
- A Generalized Doubly Robust Learning Framework for Debiasing Post-Click Conversion Rate Prediction [23.340584290411208]
Post-click conversion rate (CVR) prediction is an essential task for discovering user interests and increasing platform revenues.
Currently, doubly robust (DR) learning approaches achieve state-of-the-art performance for debiasing CVR prediction.
We propose two new DR methods, namely DR-BIAS and DR-MSE, which control the bias of the DR loss and flexibly balance bias and variance.
arXiv Detail & Related papers (2022-11-12T15:09:23Z)
- MRCLens: an MRC Dataset Bias Detection Toolkit [82.44296974850639]
We introduce MRCLens, a toolkit that detects whether biases exist before users train the full model.
For the convenience of introducing the toolkit, we also provide a categorization of common biases in MRC.
arXiv Detail & Related papers (2022-07-18T21:05:39Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning; a rough sketch of a cross-pairwise objective follows below.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
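As a rough illustration of the cross-pairwise idea, the sketch below takes a two-pair reading of the objective; the function and argument names are invented here, and the paper's exact loss and sampling scheme may differ.

```python
import numpy as np

def cross_pairwise_loss(s_u1_i1, s_u2_i2, s_u1_i2, s_u2_i1):
    """Cross-pairwise loss over two observed interactions (u1, i1) and (u2, i2).

    Takes -log(sigmoid(.)) of the "cross" score difference. If the exposure
    probability factorizes into a user term times an item term, each of those
    terms appears once with a plus and once with a minus sign in the
    difference and cancels, which is the intuition behind unbiased learning
    without an explicit propensity model. A two-pair sketch of the idea, not
    necessarily the exact CPR objective.
    """
    diff = s_u1_i1 + s_u2_i2 - s_u1_i2 - s_u2_i1
    return float(np.mean(np.logaddexp(0.0, -diff)))  # -log(sigmoid(diff)), averaged
```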
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Doubly Robust Collaborative Targeted Learning for Recommendation on Data Missing Not at Random [6.563595953273317]
In recommender systems, the feedback data received is always missing not at random (MNAR).
We propose DR-TMLE, which effectively captures the merits of both error imputation-based (EIB) and doubly robust (DR) methods.
We also propose a novel RCT-free collaborative targeted learning algorithm for DR-TMLE, called DR-TMLE-TL.
arXiv Detail & Related papers (2022-03-19T06:48:50Z)
- Enhanced Doubly Robust Learning for Debiasing Post-click Conversion Rate Estimation [29.27760413892272]
Post-click conversion, as a strong signal indicating the user preference, is salutary for building recommender systems.
Currently, most existing methods utilize counterfactual learning to debias recommender systems.
We propose a novel double learning approach for the MRDR estimator, which can convert the error imputation into the general CVR estimation.
arXiv Detail & Related papers (2021-05-28T06:59:49Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators of the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage; a generic sketch of a cross-fit doubly robust estimator follows below.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
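For concreteness, here is a generic sketch of a cross-fit doubly robust (AIPW) estimator of the ACE with a binary treatment. The nuisance model choices and all names are placeholders and do not reflect the estimators or settings of the cited simulation study.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingRegressor

def crossfit_aipw_ace(X, a, y, n_splits=2, seed=0):
    """Generic cross-fit AIPW (doubly robust) estimate of the ACE E[Y(1) - Y(0)].

    For each fold, the outcome and propensity (nuisance) models are fit on
    the remaining data and evaluated on the held-out fold; the per-unit AIPW
    scores are then averaged. Model choices here are placeholders.
    """
    psi = np.zeros(len(y), dtype=float)
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in folds.split(X):
        ps = LogisticRegression(max_iter=1000).fit(X[train], a[train])
        m1 = GradientBoostingRegressor().fit(X[train][a[train] == 1], y[train][a[train] == 1])
        m0 = GradientBoostingRegressor().fit(X[train][a[train] == 0], y[train][a[train] == 0])
        p = np.clip(ps.predict_proba(X[test])[:, 1], 1e-3, 1 - 1e-3)
        mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
        psi[test] = (mu1 - mu0
                     + a[test] * (y[test] - mu1) / p
                     - (1 - a[test]) * (y[test] - mu0) / (1 - p))
    return float(psi.mean())
```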
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.