Do learned representations respect causal relationships?
- URL: http://arxiv.org/abs/2204.00762v1
- Date: Sat, 2 Apr 2022 04:53:10 GMT
- Title: Do learned representations respect causal relationships?
- Authors: Lan Wang and Vishnu Naresh Boddeti
- Abstract summary: First, we introduce NCINet, an approach for observational causal discovery from high-dimensional data.
Second, we apply NCINet to identify the causal relations between image representations of different pairs of attributes with known and unknown causal relations between the labels.
Third, we analyze the effect on the underlying causal relation between learned representations induced by various design choices in representation learning.
- Score: 30.36097461828338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data often has many semantic attributes that are causally associated with
each other. But do attribute-specific learned representations of data also
respect the same causal relations? We answer this question in three steps.
First, we introduce NCINet, an approach for observational causal discovery from
high-dimensional data. It is trained purely on synthetically generated
representations and can be applied to real representations, and is specifically
designed to mitigate the domain gap between the two. Second, we apply NCINet to
identify the causal relations between image representations of different pairs
of attributes with known and unknown causal relations between the labels. For
this purpose, we consider image representations learned for predicting
attributes on the 3D Shapes, CelebA, and the CASIA-WebFace datasets, which we
annotate with multiple multi-class attributes. Third, we analyze the effect on
the underlying causal relation between learned representations induced by
various design choices in representation learning. Our experiments indicate
that (1) NCINet significantly outperforms existing observational causal
discovery approaches for estimating the causal relation between pairs of random
samples, both in the presence and absence of an unobserved confounder, (2)
under controlled scenarios, learned representations can indeed satisfy the
underlying causal relations between their respective labels, and (3) the causal
relations are positively correlated with the predictive capability of the
representations.
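The synthetic-to-real recipe described above can be illustrated with a toy stand-in (this is a hypothetical sketch, not NCINet's actual architecture or features): a direction classifier is trained purely on synthetically generated cause-effect pairs with known direction, then applied to unseen pairs.

```python
import numpy as np

# Hypothetical sketch of learned observational causal discovery:
# train a classifier only on synthetic cause-effect samples, then
# predict the causal direction of held-out pairs.

rng = np.random.default_rng(0)

def synth_pair(n=200):
    """One synthetic pair: the cause is Gaussian, the effect is a noisy
    nonlinear function of it; the direction is randomized and labelled."""
    x = rng.normal(size=n)
    y = np.tanh(rng.uniform(0.5, 2.0) * x) + 0.1 * rng.normal(size=n)
    if rng.random() < 0.5:
        return x, y, 1          # label 1: first variable causes the second
    return y, x, 0              # label 0: direction reversed

def featurize(a, b):
    """Simple joint statistics; the asymmetric ones carry the signal."""
    return np.array([np.corrcoef(a, b)[0, 1],
                     a.std(), b.std(),
                     np.abs(a).mean(), np.abs(b).mean()])

def make_set(m):
    pairs = [synth_pair() for _ in range(m)]
    X = np.stack([featurize(a, b) for a, b, _ in pairs])
    y = np.array([lbl for _, _, lbl in pairs], dtype=float)
    return X, y

def train_logreg(X, y, lr=0.5, steps=3000):
    """Minimal logistic-regression stand-in for the discovery model."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        s = np.clip(X @ w + b, -30.0, 30.0)   # clip logits for stability
        p = 1.0 / (1.0 + np.exp(-s))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

Xtr, ytr = make_set(400)        # trained purely on synthetic pairs
w, b = train_logreg(Xtr, ytr)
Xte, yte = make_set(100)        # unseen pairs
acc = ((Xte @ w + b > 0).astype(float) == yte).mean()
print(f"held-out direction accuracy: {acc:.2f}")
```

The toy classifier succeeds because the effect variable is statistically distinguishable from the cause (e.g., a compressed scale after the nonlinearity); NCINet's contribution, per the abstract, is additionally closing the domain gap so such a synthetically trained model transfers to real learned representations.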
Related papers
- Look, Learn and Leverage (L$^3$): Mitigating Visual-Domain Shift and Discovering Intrinsic Relations via Symbolic Alignment [19.700374722227107]
We propose a novel learning framework, Look, Learn and Leverage (L$^3$), which decomposes the learning process into three distinct phases.
A relations discovery model can be trained on the source domain; when the visual domain shifts and the intrinsic relations are absent, the pretrained model can be reused directly while maintaining satisfactory performance.
arXiv Detail & Related papers (2024-08-30T15:53:48Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Towards Causal Representation Learning and Deconfounding from Indefinite Data [17.793702165499298]
Non-statistical data (e.g., images, text, etc.) encounters significant conflicts in terms of properties and methods with traditional causal data.
We redefine causal data from two novel perspectives and then propose three data paradigms.
We implement the above designs as a dynamic variational inference model, tailored to learn causal representation from indefinite data.
arXiv Detail & Related papers (2023-05-04T08:20:37Z)
- DOMINO: Visual Causal Reasoning with Time-Dependent Phenomena [59.291745595756346]
We propose a set of visual analytics methods that allow humans to participate in the discovery of causal relations associated with windows of time delay.
Specifically, we leverage a well-established method, logic-based causality, to enable analysts to test the significance of potential causes.
Since an effect can be a cause of other effects, we allow users to aggregate different temporal cause-effect relations found with our method into a visual flow diagram.
arXiv Detail & Related papers (2023-03-12T03:40:21Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Causal Transportability for Visual Recognition [70.13627281087325]
We show that standard classifiers fail because the association between images and labels is not transportable across settings.
We then show that the causal effect, which severs all sources of confounding, remains invariant across domains.
This motivates us to develop an algorithm to estimate the causal effect for image classification.
arXiv Detail & Related papers (2022-04-26T15:02:11Z)
- Learning latent causal relationships in multiple time series [0.0]
In many systems, the causal relations are embedded in a latent space that is expressed in the observed data as a linear mixture.
A technique for blindly identifying the latent sources is presented.
The proposed technique is unsupervised and can be readily applied to any multiple time series to shed light on the causal relationships underlying the data.
arXiv Detail & Related papers (2022-03-21T00:20:06Z)
- Generalizable Information Theoretic Causal Representation [37.54158138447033]
We propose to learn causal representation from observational data by regularizing the learning procedure with mutual information measures according to our hypothetical causal graph.
The optimization involves a counterfactual loss, from which we derive a theoretical guarantee that causality-inspired learning achieves reduced sample complexity and better generalization ability.
arXiv Detail & Related papers (2022-02-17T00:38:35Z)
- Fuzzy Stochastic Timed Petri Nets for Causal properties representation [68.8204255655161]
Causal relations are frequently represented by directed graphs, with nodes denoting causes and links denoting causal influence.
Common methods used for graphically representing causal scenarios are neurons, truth tables, causal Bayesian networks, cognitive maps and Petri Nets.
We will show that, although each traditional model can represent some of the aforementioned properties separately, none of them can illustrate all of the properties at once.
arXiv Detail & Related papers (2020-11-24T13:22:34Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
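The Causal Layer idea in the CausalVAE entry can be sketched as a linear structural causal model: independent exogenous factors eps become causally entangled endogenous factors z by solving z = Aᵀz + eps, i.e. z = (I − Aᵀ)⁻¹ eps. This is a minimal illustration under an assumed, hand-written adjacency matrix A, not the paper's learned model.

```python
import numpy as np

# Illustrative DAG adjacency (hypothetical, not from the paper):
# factor 0 influences factor 1, which influences factor 2.
A = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])   # strictly upper-triangular => a DAG

def causal_layer(eps, A):
    """Map independent exogenous factors (rows of eps) to endogenous
    ones by solving the linear SCM z = A^T z + eps for z."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A.T, eps.T).T

rng = np.random.default_rng(0)
eps = rng.normal(size=(10_000, 3))   # independent factors, one row per sample
z = causal_layer(eps, A)             # now statistically dependent

corr = np.corrcoef(z.T)
print(np.round(corr, 2))             # off-diagonal entries are nonzero
```

Because A is acyclic (nilpotent), I − Aᵀ is always invertible, so the layer is well defined; the resulting z carries exactly the dependence structure of the chosen DAG, which is what makes the learned representation causally interpretable.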
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.