The Effect of Noise Level on Causal Identification with Additive Noise Models
- URL: http://arxiv.org/abs/2108.11320v1
- Date: Tue, 24 Aug 2021 11:18:41 GMT
- Title: The Effect of Noise Level on Causal Identification with Additive Noise Models
- Authors: Benjamin Kap
- Abstract summary: We consider the impact of different noise levels on the ability of Additive Noise Models to identify the direction of the causal relationship.
Two specific methods have been selected: Regression with Subsequent Independence Test and Identification using Conditional Variances.
The results of the experiments show that these methods can fail to capture the true causal direction for some levels of noise.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, a great deal of research has been conducted in the area of causal inference and causal learning. Many methods have been developed to
identify the cause-effect pairs in models and have been successfully applied to
observational real-world data in order to determine the direction of causal
relationships. Many of these methods require simplifying assumptions, such as
absence of confounding, cycles, and selection bias. Yet in bivariate situations
causal discovery problems remain challenging. One class of such methods, that
also allows tackling the bivariate case, is based on Additive Noise Models
(ANMs). Unfortunately, one aspect of these methods has received little attention until now: the impact of different noise levels on their ability to identify the direction of the causal relationship. This work aims to bridge this gap with an empirical study. We considered the bivariate case, the most elementary form of a causal discovery problem, in which one must decide whether X causes Y or Y causes X given the joint distribution of the two variables X and Y. Furthermore, two
specific methods have been selected, \textit{Regression with Subsequent
Independence Test} and \textit{Identification using Conditional Variances},
which have been tested with an exhaustive range of ANMs where the additive
noises' levels gradually change from 1% to 10000% of the causes' noise level
(the latter remains fixed). Additionally, the experiments in this work consider
several different types of distributions as well as linear and non-linear ANMs.
The results of the experiments show that these methods can fail to capture the
true causal direction for some levels of noise.
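To make the experimental setup concrete, below is a minimal, self-contained sketch (not the authors' implementation) of the two decision rules named in the abstract, applied to simulated bivariate ANM data whose effect-side noise level is swept from 1% to 10000% of the cause's noise level. Everything in it is a simplifying assumption for illustration: a cubic polynomial fit stands in for nonparametric regression, a rank-correlation proxy stands in for a proper HSIC-style independence test, the conditional-variance method is reduced to a marginal-variance comparison, and all function names are invented here.

```python
# Hedged sketch of bivariate ANM causal-direction identification under a noise-level sweep.
# Assumptions: polynomial regression as a stand-in for nonparametric regression, and a
# crude rank-correlation proxy as a stand-in for a formal independence test (e.g. HSIC).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)


def simulate_anm(n=2000, noise_pct=100.0, nonlinear=True):
    """Simulate a bivariate ANM X -> Y in which the additive noise on Y has a
    standard deviation equal to `noise_pct` percent of the cause's noise level."""
    cause_scale = 1.0                                   # the cause's noise level stays fixed
    x = rng.normal(0.0, cause_scale, n)                 # cause
    noise = rng.normal(0.0, cause_scale * noise_pct / 100.0, n)
    y = (np.tanh(2.0 * x) + 0.5 * x if nonlinear else 1.5 * x) + noise
    return x, y


def dependence_proxy(regressor, residuals):
    """Crude stand-in for an independence test between regressor and residuals:
    larger values indicate stronger (possibly nonlinear) dependence."""
    r1, _ = stats.spearmanr(regressor, residuals)
    r2, _ = stats.spearmanr(np.abs(regressor), np.abs(residuals))
    return abs(r1) + abs(r2)


def resit_direction(x, y, degree=3):
    """Regression with Subsequent Independence Test, bivariate sketch:
    regress each variable on the other and prefer the direction whose
    residuals look more independent of the regressor."""
    res_xy = y - np.polyval(np.polyfit(x, y, degree), x)    # fit X -> Y
    res_yx = x - np.polyval(np.polyfit(y, x, degree), y)    # fit Y -> X
    return "X->Y" if dependence_proxy(x, res_xy) < dependence_proxy(y, res_yx) else "Y->X"


def conditional_variance_direction(x, y):
    """Identification using conditional variances, reduced here to the bivariate case
    under an assumed comparable-error-variance setting: treat the variable with the
    smaller marginal variance as the cause."""
    return "X->Y" if np.var(x) < np.var(y) else "Y->X"


if __name__ == "__main__":
    # Sweep the effect's noise level from 1% to 10000% of the cause's noise level,
    # mirroring the range described in the abstract.
    for pct in (1, 10, 100, 1000, 10000):
        x, y = simulate_anm(noise_pct=pct)
        print(f"noise={pct:>5}%  RESIT: {resit_direction(x, y)}  "
              f"cond.var: {conditional_variance_direction(x, y)}")
```

In this form the sketch only shows how such a noise sweep can be organized; the paper's actual experiments use proper nonparametric regression, a formal independence test, and a much wider range of distributions and linear/non-linear ANMs.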
Related papers
- Unsupervised Pairwise Causal Discovery on Heterogeneous Data using Mutual Information Measures [49.1574468325115]
Causal Discovery is a technique that tackles the challenge by analyzing the statistical properties of the constituent variables.
We question the current (possibly misleading) baseline results on the basis that they were obtained through supervised learning.
In consequence, we approach this problem in an unsupervised way, using robust Mutual Information measures.
arXiv Detail & Related papers (2024-08-01T09:11:08Z)
- A Sparsity Principle for Partially Observable Causal Representation Learning [28.25303444099773]
Causal representation learning aims at identifying high-level causal variables from perceptual data.
We focus on learning from unpaired observations from a dataset with an instance-dependent partial observability pattern.
We propose two methods for estimating the underlying causal variables by enforcing sparsity in the inferred representation.
arXiv Detail & Related papers (2024-03-13T08:40:49Z)
- Identification of Causal Structure with Latent Variables Based on Higher Order Cumulants [31.85295338809117]
We propose a novel approach to identify the existence of a causal edge between two observed variables subject to latent variable influence.
In cases where such a causal edge exists, we introduce an asymmetry criterion to determine the causal direction.
arXiv Detail & Related papers (2023-12-19T08:20:19Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Cause-Effect Inference in Location-Scale Noise Models: Maximum Likelihood vs. Independence Testing [19.23479356810746]
A fundamental problem of causal discovery is cause-effect inference, learning the correct causal direction between two random variables.
Recently introduced heteroscedastic location-scale noise functional models (LSNMs) combine expressive power with identifiability guarantees.
We show that LSNM model selection based on maximizing likelihood achieves state-of-the-art accuracy, when the noise distributions are correctly specified.
arXiv Detail & Related papers (2023-01-26T20:48:32Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- Causal Identification with Additive Noise Models: Quantifying the Effect of Noise [5.037636944933989]
This work investigates the impact of different noise levels on the ability of Additive Noise Models to identify the direction of the causal relationship.
We use an exhaustive range of models where the level of additive noise gradually changes from 1% to 10000% of the causes' noise level.
The results of the experiments show that ANM-based methods can fail to capture the true causal direction for some levels of noise.
arXiv Detail & Related papers (2021-10-15T13:28:33Z)
- Variance Minimization in the Wasserstein Space for Invariant Causal Prediction [72.13445677280792]
In this work, we show that the approach taken in ICP may be reformulated as a series of nonparametric tests that scales linearly in the number of predictors.
Each of these tests relies on the minimization of a novel loss function that is derived from tools in optimal transport theory.
We prove under mild assumptions that our method is able to recover the set of identifiable direct causes, and we demonstrate in our experiments that it is competitive with other benchmark causal discovery algorithms.
arXiv Detail & Related papers (2021-10-13T22:30:47Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Information-Theoretic Approximation to Causal Models [0.0]
We show that it is possible to solve the problem of inferring the causal direction and causal effect between two random variables from a finite sample.
We embed distributions that originate from samples of X and Y into a higher dimensional probability space.
We show that this information-theoretic approximation to causal models (IACM) can be done by solving a linear optimization problem.
arXiv Detail & Related papers (2020-07-29T18:34:58Z)