CausPref: Causal Preference Learning for Out-of-Distribution
Recommendation
- URL: http://arxiv.org/abs/2202.03984v2
- Date: Wed, 9 Feb 2022 04:06:02 GMT
- Title: CausPref: Causal Preference Learning for Out-of-Distribution
Recommendation
- Authors: Yue He, Zimu Wang, Peng Cui, Hao Zou, Yafeng Zhang, Qiang Cui, Yong
Jiang
- Abstract summary: Current recommender systems remain vulnerable to distribution shifts of users and items in realistic scenarios.
We propose to incorporate a recommendation-specific DAG learner into a novel causal preference-based recommendation framework named CausPref.
Our approach significantly surpasses the benchmark models under various types of out-of-distribution settings.
- Score: 36.22965012642248
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite the tremendous recent development of recommender systems driven by
advances in machine learning, current recommender systems are still vulnerable to
distribution shifts of users and items in realistic scenarios, leading to sharp
performance declines in testing environments. The problem is even more severe in many
common applications where only implicit feedback from sparse data is available. Hence,
it is crucial to promote the performance stability of recommendation methods across
different environments. In this work, we first make a thorough analysis of the implicit
recommendation problem from the viewpoint of out-of-distribution (OOD) generalization.
Then, guided by our theoretical analysis, we propose to incorporate a
recommendation-specific DAG learner into a novel causal preference-based recommendation
framework named CausPref, which mainly consists of causal learning of invariant user
preferences and anti-preference negative sampling to handle implicit feedback. Extensive
experimental results on real-world datasets clearly demonstrate that our approach
significantly surpasses the benchmark models under various types of out-of-distribution
settings and exhibits impressive interpretability.
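The abstract names the framework's components but not their implementation. As a rough, non-authoritative illustration of what anti-preference negative sampling for implicit feedback could look like, the sketch below scores unobserved items with a learned user-preference function and samples negatives preferentially from the lowest-scoring ("anti-preference") items instead of uniformly. The function names, the dot-product preference score, and the softmax weighting are our assumptions, not details from the paper; the DAG learner and the invariance constraints are not reflected here.

```python
import numpy as np

rng = np.random.default_rng(0)

def preference_score(user_vec, item_vecs):
    """Hypothetical invariant user-preference score: a plain dot product
    between a user's preference vector and item feature vectors."""
    return item_vecs @ user_vec

def anti_preference_negatives(user_vec, item_vecs, observed, k, temperature=1.0):
    """Sample k negatives for one user from the unobserved items, biased
    toward items the preference model scores lowest ("anti-preference")
    rather than sampling uniformly at random."""
    candidates = np.setdiff1d(np.arange(len(item_vecs)), observed)
    scores = preference_score(user_vec, item_vecs[candidates])
    weights = np.exp(-scores / temperature)   # low score -> high sampling weight
    probs = weights / weights.sum()
    k = min(k, len(candidates))
    return rng.choice(candidates, size=k, replace=False, p=probs)

# Toy usage: 50 items with 8-dimensional features, 5 observed positives.
item_vecs = rng.normal(size=(50, 8))
user_vec = rng.normal(size=8)
observed = np.array([1, 4, 7, 20, 33])
print(anti_preference_negatives(user_vec, item_vecs, observed, k=4))
```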
Related papers
- CSRec: Rethinking Sequential Recommendation from A Causal Perspective [25.69446083970207]
The essence of sequential recommender systems (RecSys) lies in understanding how users make decisions.
We propose a novel formulation of sequential recommendation, termed Causal Sequential Recommendation (CSRec).
CSRec aims to predict the probability of a recommended item's acceptance within a sequential context and backtrack how current decisions are made.
arXiv Detail & Related papers (2024-08-23T23:19:14Z)
- Uncertainty-Aware Instance Reweighting for Off-Policy Learning [63.31923483172859]
We propose an Uncertainty-aware Inverse Propensity Score estimator (UIPS) for improved off-policy learning.
Experimental results on synthetic and three real-world recommendation datasets demonstrate the advantageous sample efficiency of the proposed UIPS estimator (a generic IPS sketch appears after this list).
arXiv Detail & Related papers (2023-03-11T11:42:26Z)
- Off-policy evaluation for learning-to-rank via interpolating the item-position model and the position-based model [83.83064559894989]
A critical need for industrial recommender systems is the ability to evaluate recommendation policies offline, before deploying them to production.
We develop a new estimator that mitigates the problems of the two most popular off-policy estimators for rankings.
In particular, the new estimator, called INTERPOL, addresses the bias of a potentially misspecified position-based model.
arXiv Detail & Related papers (2022-10-15T17:22:30Z)
- Debiased Recommendation with Neural Stratification [19.841871819722016]
We propose to cluster users in order to compute more accurate inverse propensity scores (IPS) by increasing exposure densities (a simplified stratified-IPS sketch appears after this list).
We conduct extensive experiments based on real-world datasets to demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-08-15T15:45:35Z)
- Deep Causal Reasoning for Recommendations [47.83224399498504]
A new trend in recommender system research is to negate the influence of confounders from a causal perspective.
We model the recommendation as a multi-cause multi-outcome (MCMO) inference problem.
We show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space.
arXiv Detail & Related papers (2022-01-06T15:00:01Z)
- Top-N Recommendation with Counterfactual User Preference Simulation [26.597102553608348]
Top-N recommendation, which aims to learn users' ranking-based preferences, has long been a fundamental problem in a wide range of applications.
In this paper, we propose to reformulate the recommendation task within the causal inference framework to handle the data scarcity problem.
arXiv Detail & Related papers (2021-09-02T14:28:46Z)
- Probabilistic and Variational Recommendation Denoising [56.879165033014026]
Learning from implicit feedback is one of the most common cases in the application of recommender systems.
We propose probabilistic and variational recommendation denoising for implicit feedback.
We employ the proposed DPI and DVAE on four state-of-the-art recommendation models and conduct experiments on three datasets.
arXiv Detail & Related papers (2021-05-20T08:59:44Z)
- Latent Unexpected Recommendations [89.2011481379093]
We propose to model unexpectedness in the latent space of user and item embeddings, which makes it possible to capture hidden and complex relations between new recommendations and historic purchases.
In addition, we develop a novel Latent Closure (LC) method to construct a hybrid utility function and provide unexpected recommendations based on the proposed model.
arXiv Detail & Related papers (2020-07-27T02:39:30Z)
- Convolutional Gaussian Embeddings for Personalized Recommendation with Uncertainty [17.258674767363345]
Most existing embedding-based recommendation models use embeddings corresponding to a single fixed point in low-dimensional space.
We propose a unified deep recommendation framework employing Gaussian embeddings, which are shown to adapt to uncertain preferences.
Our framework adopts Monte-Carlo sampling and convolutional neural networks to compute the correlation between the target user and the candidate item (a Monte-Carlo scoring sketch appears after this list).
arXiv Detail & Related papers (2020-06-19T02:10:38Z)
- Learning the Truth From Only One Side of the Story [58.65439277460011]
We focus on generalized linear models and show that without adjusting for this sampling bias, the model may converge suboptimally or even fail to converge to the optimal solution.
We propose an adaptive approach that comes with theoretical guarantees and show that it outperforms several existing methods empirically.
arXiv Detail & Related papers (2020-06-08T18:20:28Z) - Reward Constrained Interactive Recommendation with Natural Language
Feedback [158.8095688415973]
We propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time.
Specifically, we leverage a discriminator to detect recommendations violating user historical preference.
Our proposed framework is general and is further extended to the task of constrained text generation.
arXiv Detail & Related papers (2020-05-04T16:23:34Z)
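Several entries above (the UIPS estimator, Neural Stratification, and the INTERPOL off-policy evaluation work) build on inverse propensity scoring. The following is a generic, heavily simplified sketch of that idea, assuming logged interactions with known or estimated exposure propensities: a clipped, self-normalized value estimate plus cluster-level propensity estimates in the spirit of stratification. It is not the estimator proposed in any of those papers, and all function names are ours.

```python
import numpy as np

def snips_value(rewards, target_probs, logging_probs, clip=10.0):
    """Clipped, self-normalized inverse-propensity-score (IPS) estimate of a
    target recommendation policy's value from logged interactions."""
    w = np.clip(np.asarray(target_probs) / np.asarray(logging_probs), 0.0, clip)
    return float(np.sum(w * rewards) / np.sum(w))

def stratified_propensities(user_clusters, shown_items, n_items):
    """Estimate exposure propensities as within-cluster exposure frequencies,
    a toy stand-in for stratifying users to densify exposure counts."""
    props = np.zeros((user_clusters.max() + 1, n_items))
    for c, i in zip(user_clusters, shown_items):
        props[c, i] += 1.0
    row_sums = props.sum(axis=1, keepdims=True)
    return props / np.maximum(row_sums, 1.0)

# Toy usage: 6 logged interactions over 4 items, users grouped into 2 clusters.
user_clusters = np.array([0, 0, 0, 1, 1, 1])
shown_items   = np.array([0, 1, 0, 2, 3, 2])
rewards       = np.array([1., 0., 1., 0., 1., 0.])
props = stratified_propensities(user_clusters, shown_items, n_items=4)
logging_probs = props[user_clusters, shown_items]
target_probs  = np.full(6, 0.25)          # e.g. a uniform target policy
print(snips_value(rewards, target_probs, logging_probs))
```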
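The Convolutional Gaussian Embeddings entry represents users and items as distributions rather than points. As a minimal illustration of that idea, one can score a user-item pair by Monte-Carlo sampling from diagonal Gaussian embeddings; the convolutional correlation module from that paper is omitted here and a plain expected inner product stands in for it, with all names being our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_match_score(user_mu, user_sigma, item_mu, item_sigma, n_samples=256):
    """Monte-Carlo estimate of the expected inner product between a user and
    an item, each represented by a diagonal Gaussian embedding."""
    u = rng.normal(user_mu, user_sigma, size=(n_samples, user_mu.size))
    v = rng.normal(item_mu, item_sigma, size=(n_samples, item_mu.size))
    return float(np.mean(np.sum(u * v, axis=1)))

# Toy usage with 8-dimensional embeddings; larger sigma = more uncertainty.
d = 8
user_mu, item_mu = rng.normal(size=d), rng.normal(size=d)
print(mc_match_score(user_mu, np.full(d, 0.1), item_mu, np.full(d, 0.3)))
```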
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences.