Estimation of Bivariate Structural Causal Models by Variational Gaussian
Process Regression Under Likelihoods Parametrised by Normalising Flows
- URL: http://arxiv.org/abs/2109.02521v1
- Date: Mon, 6 Sep 2021 14:52:58 GMT
- Title: Estimation of Bivariate Structural Causal Models by Variational Gaussian
Process Regression Under Likelihoods Parametrised by Normalising Flows
- Authors: Nico Reick, Felix Wiewel, Alexander Bartler and Bin Yang
- Abstract summary: Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
- Score: 74.85071867225533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One major drawback of state-of-the-art artificial intelligence is its lack of
explainability. One approach to solving this problem is to take causality into
account. Causal mechanisms can be described by structural causal models. In
this work, we propose a method for estimating bivariate structural causal
models using a combination of normalising flows applied to density estimation
and variational Gaussian process regression for post-nonlinear models. It
facilitates causal discovery, i.e. distinguishing cause and effect, by either
the independence of cause and residual or a likelihood ratio test. Our method,
which estimates post-nonlinear models, can better explain a variety of
real-world cause-effect pairs than a simple additive noise model. Though it
remains difficult to exploit this benefit across all pairs from the
Tübingen benchmark database, we demonstrate that combining the additive noise
model approach with our method significantly enhances causal discovery.
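The decision rule sketched in the abstract can be illustrated with a small, hedged example: fit a regression model in each candidate direction, compute the residuals, and prefer the direction in which the residual is least dependent on the putative cause. The sketch below is a simplified stand-in, not the paper's method: it uses an exact scikit-learn Gaussian process with an additive-noise approximation instead of the proposed variational GP with a normalising-flow likelihood, and a biased HSIC estimator with median-heuristic bandwidths as the dependence measure; the helper names (`residual_dependence`, `infer_direction`) are illustrative only.
```python
# Minimal sketch (NOT the paper's implementation): decide between x -> y and
# y -> x by fitting a GP regression in each direction and comparing how
# dependent the residual is on the putative cause (lower HSIC = preferred).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def _rbf_gram(z, bandwidth):
    # RBF Gram matrix for a 1-D sample.
    sq_dists = (z[:, None] - z[None, :]) ** 2
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))


def hsic(a, b):
    # Biased HSIC estimator with median-heuristic bandwidths.
    n = len(a)
    med = lambda z: np.median(np.abs(z[:, None] - z[None, :])) + 1e-12
    K, L = _rbf_gram(a, med(a)), _rbf_gram(b, med(b))
    H = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2


def residual_dependence(cause, effect):
    # Additive-noise stand-in for the post-nonlinear fit: effect ~ f(cause) + noise.
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(cause[:, None], effect)
    residual = effect - gp.predict(cause[:, None])
    return hsic(cause, residual)


def infer_direction(x, y):
    # Prefer the direction whose residual is less dependent on the cause.
    return "x -> y" if residual_dependence(x, y) < residual_dependence(y, x) else "y -> x"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=300)
    y = np.tanh(2.0 * x) + 0.2 * rng.normal(size=300)  # ground truth: x -> y
    print(infer_direction(x, y))
```
On data generated as y = tanh(2x) + noise, the residual of the causal fit is typically close to independent of x, so the lower HSIC score recovers x -> y; the paper's variational GP with a flow-parametrised likelihood replaces the plain GP and Gaussian noise assumption used here.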
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792] (2024-10-02)
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
- Enabling Causal Discovery in Post-Nonlinear Models with Normalizing Flows [6.954510776782872] (2024-07-06)
Post-nonlinear (PNL) causal models stand out as a versatile and adaptable framework for modeling causal relationships.
We introduce CAF-PoNo, harnessing the power of the normalizing flows architecture to enforce the crucial invertibility constraint in PNL models.
Our method precisely reconstructs the hidden noise, which plays a vital role in cause-effect identification.
- Robust Estimation of Causal Heteroscedastic Noise Models [7.568978862189266] (2023-12-15)
Student's $t$-distribution is known for its robustness in accounting for sampling variability with smaller sample sizes and extreme values without significantly altering the overall distribution shape.
Our empirical evaluations demonstrate that our estimators are more robust and achieve better overall performance across synthetic and real benchmarks.
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561] (2023-10-24)
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
- Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions [68.6505592770171] (2021-03-08)
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
- Causal Autoregressive Flows [4.731404257629232] (2020-11-04)
We highlight an intrinsic correspondence between a simple family of autoregressive normalizing flows and identifiable causal models.
We exploit the fact that autoregressive flow architectures define an ordering over variables, analogous to a causal ordering, to show that they are well-suited to performing a range of causal inference tasks (a minimal likelihood-comparison sketch in this spirit appears after this list).
- Estimation of Structural Causal Model via Sparsely Mixing Independent Component Analysis [4.7210697296108926] (2020-09-07)
We propose a new estimation method for a linear DAG model with non-Gaussian noises.
The proposed method enables us to estimate the causal order and the parameters simultaneously.
Numerical experiments show that the proposed method outperforms existing methods.
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485] (2020-06-11)
Stochastic optimization is central to modern machine learning, but its role in that success is still unclear.
We show that heavy tails commonly arise in the parameters due to multiplicative noise.
A detailed analysis is conducted in which we describe key factors, including step size and data, and show that state-of-the-art neural network models exhibit similar behavior.
- A Critical View of the Structural Causal Model [89.43277111586258] (2020-02-23)
We show that one can identify the cause and the effect without considering their interaction at all.
We propose a new adversarial training method that mimics the disentangled structure of the causal model.
Our multidimensional method outperforms the literature methods on both synthetic and real world datasets.
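The Causal Autoregressive Flows entry above and the likelihood ratio test mentioned in the abstract share the same underlying idea: each candidate causal ordering induces a factorisation of the joint density, and the ordering whose factorisation assigns the higher held-out likelihood is preferred. The sketch below illustrates that comparison with deliberately simple stand-ins rather than the flows used in the cited works: a kernel density estimate for the marginal and a GP-plus-Gaussian-noise model for the conditional; `factorisation_loglik` and the 70/30 split are illustrative choices, not part of either paper.
```python
# Minimal sketch (stand-in models, not the cited flow-based methods): score the
# two factorisations p(x)p(y|x) and p(y)p(x|y) on held-out data and pick the
# direction with the higher log-likelihood, in the spirit of a likelihood
# ratio test between causal orderings.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.neighbors import KernelDensity


def factorisation_loglik(cause_tr, effect_tr, cause_te, effect_te):
    # log p(cause): kernel density estimate fitted on the training split.
    kde = KernelDensity(bandwidth=0.3).fit(cause_tr[:, None])
    log_marginal = kde.score_samples(cause_te[:, None])
    # log p(effect | cause): GP mean with a homoscedastic Gaussian residual,
    # a crude stand-in for a conditional normalising flow.
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(cause_tr[:, None], effect_tr)
    resid_std = np.std(effect_tr - gp.predict(cause_tr[:, None])) + 1e-12
    log_conditional = norm.logpdf(
        effect_te, loc=gp.predict(cause_te[:, None]), scale=resid_std
    )
    return float(np.sum(log_marginal + log_conditional))


def infer_direction(x, y, train_frac=0.7, seed=0):
    # Split once, score both factorisations on the held-out part.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_tr = int(train_frac * len(x))
    tr, te = idx[:n_tr], idx[n_tr:]
    ll_xy = factorisation_loglik(x[tr], y[tr], x[te], y[te])  # x -> y
    ll_yx = factorisation_loglik(y[tr], x[tr], y[te], x[te])  # y -> x
    return "x -> y" if ll_xy > ll_yx else "y -> x"


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(-2, 2, size=500)
    y = x ** 3 + 0.3 * rng.normal(size=500)  # ground truth: x -> y
    print(infer_direction(x, y))
```
Scoring on a held-out split rather than on the training data keeps the comparison from simply rewarding the more flexible conditional model.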