Causal Disentangled Variational Auto-Encoder for Preference
Understanding in Recommendation
- URL: http://arxiv.org/abs/2304.07922v1
- Date: Mon, 17 Apr 2023 00:10:56 GMT
- Title: Causal Disentangled Variational Auto-Encoder for Preference
Understanding in Recommendation
- Authors: Siyu Wang and Xiaocong Chen and Quan Z. Sheng and Yihong Zhang and
Lina Yao
- Abstract summary: This paper introduces the Causal Disentangled Variational Auto-Encoder (CaD-VAE), a novel approach for learning causal disentangled representations from interaction data in recommender systems.
The approach utilizes structural causal models to generate causal representations that describe the causal relationship between latent factors.
- Score: 50.93536377097659
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommendation models are typically trained on observational user interaction
data, but the interactions between latent factors in users' decision-making
processes lead to complex and entangled data. Disentangling these latent
factors to uncover their underlying representation can improve the robustness,
interpretability, and controllability of recommendation models. This paper
introduces the Causal Disentangled Variational Auto-Encoder (CaD-VAE), a novel
approach for learning causal disentangled representations from interaction data
in recommender systems. The CaD-VAE method considers the causal relationships
between semantically related factors in real-world recommendation scenarios,
rather than enforcing independence as in existing disentanglement methods. The
approach utilizes structural causal models to generate causal representations
that describe the causal relationship between latent factors. The results
demonstrate that CaD-VAE outperforms existing methods, offering a promising
solution for disentangling complex user behavior data in recommendation
systems.
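The central idea above is that latent factors should be linked through a structural causal model (SCM) rather than forced to be mutually independent. The following is a minimal PyTorch sketch of that idea, assuming a linear SCM over the latent factors (z = A^T z + eps, with A a learnable adjacency matrix); all class and parameter names here are hypothetical and the actual CaD-VAE architecture may differ.

    import torch
    import torch.nn as nn

    class CausalLatentLayer(nn.Module):
        # Maps independent exogenous noise eps to causally related factors z,
        # assuming a linear SCM: z = z @ A + eps, i.e. z = eps @ (I - A)^{-1}.
        # (Illustrative only; the paper's exact layer is not specified here.)
        def __init__(self, k: int):
            super().__init__()
            self.A = nn.Parameter(torch.zeros(k, k))           # learnable causal adjacency
            self.register_buffer("mask", 1.0 - torch.eye(k))   # forbid self-loops

        def forward(self, eps: torch.Tensor) -> torch.Tensor:
            A = self.A * self.mask
            I = torch.eye(A.size(0), device=eps.device)
            return eps @ torch.inverse(I - A)

    class ToyCausalVAE(nn.Module):
        # Minimal VAE whose latent code is passed through the causal layer
        # before decoding reconstructed user-item interactions.
        def __init__(self, n_items: int, k: int = 4, hidden: int = 64):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_items, hidden), nn.Tanh())
            self.mu = nn.Linear(hidden, k)
            self.logvar = nn.Linear(hidden, k)
            self.causal = CausalLatentLayer(k)
            self.dec = nn.Sequential(nn.Linear(k, hidden), nn.Tanh(),
                                     nn.Linear(hidden, n_items))

        def forward(self, x: torch.Tensor):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            eps = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
            z = self.causal(eps)   # causal, no longer independent, factors
            return self.dec(z), mu, logvar

Training such a model would combine a reconstruction loss with the usual KL term; the additional disentanglement and supervision objectives described in the paper are omitted from this sketch.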
Related papers
- Revisiting Spurious Correlation in Domain Generalization [12.745076668687748]
We build a structural causal model (SCM) to describe the causality within the data generation process.
We further conduct a thorough analysis of the mechanisms underlying spurious correlation.
In this regard, we propose to control confounding bias in OOD generalization by introducing a propensity score weighted estimator.
arXiv Detail & Related papers (2024-06-17T13:22:00Z)
- Causal Flow-based Variational Auto-Encoder for Disentangled Causal Representation Learning [1.4875602190483512]
Disentangled representation learning aims to learn low-dimensional representations of data, where each dimension corresponds to an underlying generative factor.
We design a new VAE-based framework named Disentangled Causal Variational Auto-Encoder (DCVAE)
DCVAE includes a variant of autoregressive flows known as causal flows, capable of learning effective causal disentangled representations.
arXiv Detail & Related papers (2023-04-18T14:26:02Z)
- Less is More: Mitigate Spurious Correlations for Open-Domain Dialogue Response Generation Models by Causal Discovery [52.95935278819512]
We conduct the first study of spurious correlations in open-domain response generation models, based on CGDIALOG, a corpus curated in our work.
Inspired by causal discovery algorithms, we propose a novel model-agnostic method for training and inference of response generation models.
arXiv Detail & Related papers (2023-03-02T06:33:48Z)
- Deep Causal Reasoning for Recommendations [47.83224399498504]
A new trend in recommender system research is to negate the influence of confounders from a causal perspective.
We model the recommendation as a multi-cause multi-outcome (MCMO) inference problem.
We show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space.
arXiv Detail & Related papers (2022-01-06T15:00:01Z)
- Explainable Recommendation Systems by Generalized Additive Models with Manifest and Latent Interactions [3.022014732234611]
We propose explainable recommendation systems based on a generalized additive model with manifest and latent interactions.
A new Python package GAMMLI is developed for efficient model training and visualized interpretation of the results.
arXiv Detail & Related papers (2020-12-15T10:29:12Z)
- Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlation during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy (a sketch of a common acyclicity penalty used for this kind of DAG learning appears after this list).
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
- Estimating the Effects of Continuous-valued Interventions using Generative Adversarial Networks [103.14809802212535]
We build on the generative adversarial networks (GANs) framework to address the problem of estimating the effect of continuous-valued interventions.
Our model, SCIGAN, is flexible and capable of simultaneously estimating counterfactual outcomes for several different continuous interventions.
To address the challenges presented by shifting to continuous interventions, we propose a novel architecture for our discriminator.
arXiv Detail & Related papers (2020-02-27T18:46:21Z)
- Resolving Spurious Correlations in Causal Models of Environments via Interventions [2.836066255205732]
We consider the problem of inferring a causal model of a reinforcement learning environment.
Our method designs a reward function that incentivizes an agent to perform interventions that reveal errors in the causal model.
The experimental results in a grid-world environment show that our approach leads to better causal models compared to baselines.
arXiv Detail & Related papers (2020-02-12T20:20:47Z)
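For the DAG structure mentioned in the CausalVAE entry above, a common way to keep a learnable adjacency matrix acyclic is a NOTEARS-style penalty h(A) = tr(exp(A ∘ A)) − d. Whether CausalVAE or CaD-VAE uses exactly this form is not stated in the summaries above, so the snippet below is only an illustrative sketch.

    import torch

    def acyclicity_penalty(A: torch.Tensor) -> torch.Tensor:
        # NOTEARS-style constraint: h(A) = tr(exp(A * A)) - d equals zero
        # exactly when the weighted adjacency A encodes a DAG; adding
        # lambda * h(A) to the training loss steers the learned causal
        # graph towards acyclicity. (Illustrative; not the papers' exact term.)
        d = A.size(0)
        return torch.trace(torch.matrix_exp(A * A)) - d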