A Causal Ordering Prior for Unsupervised Representation Learning
- URL: http://arxiv.org/abs/2307.05704v1
- Date: Tue, 11 Jul 2023 18:12:05 GMT
- Title: A Causal Ordering Prior for Unsupervised Representation Learning
- Authors: Avinash Kori, Pedro Sanchez, Konstantinos Vilouras, Ben Glocker,
Sotirios A. Tsaftaris
- Abstract summary: Causal representation learning argues that factors of variation in a dataset are, in fact, causally related.
We propose a fully unsupervised representation learning method that considers a data generation process with a latent additive noise model.
- Score: 27.18951912984905
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Unsupervised representation learning with variational inference relies
heavily on independence assumptions over latent variables. Causal
representation learning (CRL), however, argues that factors of variation in a
dataset are, in fact, causally related. Allowing latent variables to be
correlated, as a consequence of causal relationships, is more realistic and
generalisable. So far, provably identifiable methods rely on auxiliary
information, weak labels, and interventional or even counterfactual data.
Inspired by causal discovery with functional causal models, we propose a fully
unsupervised representation learning method that considers a data generation
process with a latent additive noise model (ANM). We encourage the latent space
to follow a causal ordering via a loss function based on the Hessian of the
latent distribution.
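The ordering loss can be made concrete. The sketch below is a minimal illustration, not the paper's implementation: it relies on the score-based causal discovery result (Rolland et al., 2022) that, for a nonlinear ANM with Gaussian noise, a variable z_j is a leaf if and only if the diagonal Hessian entry d^2 log p(z)/dz_j^2 has zero variance across samples; penalising that variance encourages leaf behaviour, and applying the idea recursively induces a causal ordering. The surrogate log_prob, the batch shapes, and the min-aggregation are our assumptions.

```python
# Minimal sketch of a Hessian-based ordering penalty (assumptions noted above).
import torch

def hessian_diagonal(log_prob, z):
    """Per-sample diagonal of the Hessian of log p(z) with respect to z."""
    z = z.detach().requires_grad_(True)
    logp = log_prob(z).sum()
    score = torch.autograd.grad(logp, z, create_graph=True)[0]  # d log p / d z
    cols = []
    for j in range(z.shape[1]):
        # Differentiate the j-th score entry again w.r.t. z_j
        # (samples in the batch are independent, so summing is safe).
        g = torch.autograd.grad(score[:, j].sum(), z, create_graph=True)[0][:, j]
        cols.append(g)
    return torch.stack(cols, dim=1)  # shape: (batch, latent_dim)

def ordering_penalty(log_prob, z):
    """Var over the batch of H_jj vanishes iff z_j is a leaf (nonlinear ANM,
    Gaussian noise); penalising the minimum pushes at least one latent
    dimension toward leaf behaviour."""
    h = hessian_diagonal(log_prob, z)
    return h.var(dim=0).min()  # aggregation choice is ours, not the paper's

# Toy usage with a stand-in Gaussian log-density (a real model would supply
# a learned latent density).
log_prob = lambda z: -0.5 * (z ** 2).sum(dim=1)
penalty = ordering_penalty(log_prob, torch.randn(128, 4))
print(penalty.item())
```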
Related papers
- Identifiability Guarantees for Causal Disentanglement from Purely Observational Data [10.482728002416348]
Causal disentanglement aims to learn about latent causal factors behind data.
Recent advances establish identifiability results assuming that interventions on (single) latent factors are available.
We provide a precise characterization of latent factors that can be identified in nonlinear causal models.
arXiv Detail & Related papers (2024-10-31T04:18:29Z)
- A Sparsity Principle for Partially Observable Causal Representation Learning [28.25303444099773]
Causal representation learning aims at identifying high-level causal variables from perceptual data.
We focus on learning from unpaired observations from a dataset with an instance-dependent partial observability pattern.
We propose two methods for estimating the underlying causal variables by enforcing sparsity in the inferred representation.
arXiv Detail & Related papers (2024-03-13T08:40:49Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable guarantees that these latent causal models can be identified, a property known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Towards Causal Representation Learning and Deconfounding from Indefinite Data [17.793702165499298]
Non-statistical data (e.g., images and text) conflicts with traditional causal data in both its properties and the methods it requires.
We redefine causal data from two novel perspectives and then propose three data paradigms.
We implement the above designs as a dynamic variational inference model, tailored to learn causal representation from indefinite data.
arXiv Detail & Related papers (2023-05-04T08:20:37Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlations during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The variational autoencoder (VAE) framework is commonly used to disentangle independent factors from observations.
We propose a new VAE-based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones (a minimal sketch of such a layer follows this list).
Results show that the causal representations learned by CausalVAE are semantically interpretable, and that their causal relationships, represented as a Directed Acyclic Graph (DAG), are identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
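As a pointer for the CausalVAE entry above, here is a minimal sketch of one common way to realise such a causal layer: a linear SCM z = A^T z + eps over the latent factors, inverted in closed form, with a NOTEARS-style acyclicity penalty (Zheng et al., 2018). The class name, the linear parameterisation, and the choice of penalty are our assumptions and may differ from CausalVAE's exact formulation.

```python
# Hypothetical sketch of a "Causal Layer": independent exogenous factors eps
# become endogenous factors z through a learnable linear SCM.
import torch
import torch.nn as nn

class CausalLayer(nn.Module):
    def __init__(self, n_factors: int):
        super().__init__()
        # Weighted adjacency matrix of the latent causal graph (learned).
        self.A = nn.Parameter(torch.zeros(n_factors, n_factors))

    def forward(self, eps: torch.Tensor) -> torch.Tensor:
        # Solve z = A^T z + eps in closed form: z = (I - A^T)^{-1} eps.
        n = self.A.shape[0]
        eye = torch.eye(n, device=eps.device)
        return eps @ torch.linalg.inv(eye - self.A.T).T

    def dag_penalty(self) -> torch.Tensor:
        # NOTEARS acyclicity term: tr(exp(A * A)) - n is zero iff A is a DAG.
        n = self.A.shape[0]
        return torch.trace(torch.matrix_exp(self.A * self.A)) - n

layer = CausalLayer(4)
z = layer(torch.randn(8, 4))         # endogenous latent factors
print(z.shape, layer.dag_penalty())  # torch.Size([8, 4]), ~0 at init
```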
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantees about the quality of the information provided and is not responsible for any consequences of its use.