Functional Linear Non-Gaussian Acyclic Model for Causal Discovery
- URL: http://arxiv.org/abs/2401.09641v1
- Date: Wed, 17 Jan 2024 23:27:48 GMT
- Title: Functional Linear Non-Gaussian Acyclic Model for Causal Discovery
- Authors: Tian-Le Yang, Kuang-Yao Lee, Kun Zhang, Joe Suzuki
- Abstract summary: We develop a framework to identify causal relationships in brain-effective connectivity tasks involving fMRI and EEG datasets.
We establish theoretical guarantees of the identifiability of the causal relationship among non-Gaussian random vectors and even random functions in infinite-dimensional Hilbert spaces.
For real data, we focus on analyzing the brain connectivity patterns derived from fMRI data.
- Score: 7.303542369216906
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In causal discovery, non-Gaussianity has been used to characterize the
complete configuration of a Linear Non-Gaussian Acyclic Model (LiNGAM),
encompassing both the causal ordering of variables and their respective
connection strengths. However, LiNGAM can only deal with the finite-dimensional
case. To expand this concept, we extend the notion of variables to encompass
vectors and even functions, leading to the Functional Linear Non-Gaussian
Acyclic Model (Func-LiNGAM). Our motivation stems from the desire to identify
causal relationships in brain-effective connectivity tasks involving, for
example, fMRI and EEG datasets. We demonstrate why the original LiNGAM fails to
handle these inherently infinite-dimensional datasets and explain the
applicability of functional data analysis from both empirical and theoretical
perspectives. We establish theoretical guarantees of the identifiability of
the causal relationship among non-Gaussian random vectors and even random
functions in infinite-dimensional Hilbert spaces. To address the issue of
sparsity in discrete time points within intrinsic infinite-dimensional
functional data, we propose optimizing the coordinates of the vectors using
functional principal component analysis. Experimental results on synthetic data
verify the ability of the proposed framework to identify causal relationships
among multivariate functions using the observed samples. For real data, we
focus on analyzing the brain connectivity patterns derived from fMRI data.
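As a rough illustration of the pipeline sketched in the abstract, the snippet below simulates two non-Gaussian random functions linked by a linear causal effect, compresses each sample of curves to a few principal-component scores (ordinary PCA on a dense grid is used here as a stand-in for functional PCA), and orients the edge with a simple pairwise residual-independence comparison in the spirit of LiNGAM. This is a minimal sketch under those assumptions, not the authors' estimator; all names and constants are illustrative.

```python
# Minimal sketch (not the paper's code): FPCA-style scores + a pairwise
# non-Gaussianity-based direction test between two functional variables.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n, T, k = 500, 100, 3                       # samples, grid points, components
t = np.linspace(0, 1, T)
basis = np.stack([np.sin((j + 1) * np.pi * t) for j in range(k)])  # smooth basis, (k, T)

# Non-Gaussian (uniform) scores; Y's scores depend linearly on X's scores.
sx = rng.uniform(-1, 1, size=(n, k))
A = 0.8 * rng.normal(size=(k, k))
sy = sx @ A.T + 0.3 * rng.uniform(-1, 1, size=(n, k))
X = sx @ basis + 0.01 * rng.normal(size=(n, T))   # observed curves X_i(t)
Y = sy @ basis + 0.01 * rng.normal(size=(n, T))   # observed curves Y_i(t)

# Stand-in for functional PCA: ordinary PCA on the discretised curves.
zx = PCA(n_components=k).fit_transform(X)
zy = PCA(n_components=k).fit_transform(Y)

def dependence_after_regression(cause, effect):
    """OLS-regress effect on cause, then measure (nonlinearly) how dependent
    the residuals remain on the hypothesised cause; smaller values favour
    that causal direction when the data are non-Gaussian."""
    beta, *_ = np.linalg.lstsq(cause, effect, rcond=None)
    resid = effect - cause @ beta
    m = cause.shape[1]
    cross = np.corrcoef(np.tanh(cause).T, resid.T)[:m, m:]
    return np.abs(cross).sum()

d_xy = dependence_after_regression(zx, zy)   # hypothesis: X -> Y
d_yx = dependence_after_regression(zy, zx)   # hypothesis: Y -> X
print("inferred:", "X -> Y" if d_xy < d_yx else "Y -> X", (d_xy, d_yx))
```

Because the scores are uniform (hence non-Gaussian), regressing in the correct direction leaves residuals that are essentially independent of the hypothesised cause, while the reverse regression does not; this asymmetry, applied to whole score vectors rather than scalars, is the kind of signal Func-LiNGAM exploits.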
Related papers
- Parameter identification in linear non-Gaussian causal models under general confounding [8.273471398838533]
We study identification of the linear coefficients when such models contain latent variables.
Our main result is a graphical criterion that is necessary and sufficient for deciding generic identifiability of direct causal effects.
We report on estimations based on the identification result, explore a generalization to models with feedback loops, and provide new results on the identifiability of the causal graph.
arXiv Detail & Related papers (2024-05-31T14:39:14Z)
- Directed Cyclic Graph for Causal Discovery from Multivariate Functional Data [15.26007975367927]
We introduce a functional linear structural equation model for causal structure learning.
To enhance interpretability, our model involves a low-dimensional causal embedded space.
We prove that the proposed model is causally identifiable under standard assumptions.
arXiv Detail & Related papers (2023-10-31T15:19:24Z)
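To make the data-generating picture in the entry above concrete, here is a small simulation, under simple illustrative assumptions, of multivariate functional observations whose low-dimensional embedding scores follow a linear structural equation model with a feedback cycle, so that the equilibrium scores are x = (I - B)^{-1} e. Names, dimensions, and edge weights are illustrative and not taken from the paper.

```python
# Illustrative simulation (assumed setup, not the paper's code): functional
# variables generated from a low-dimensional linear SEM containing a cycle.
import numpy as np

rng = np.random.default_rng(1)
p, k, n, T = 3, 2, 200, 50                 # variables, embedding dim, samples, grid
t = np.linspace(0, 1, T)
phi = np.stack([np.cos((j + 1) * np.pi * t) for j in range(k)])  # embedding basis

# Linear SEM on the stacked embedding scores, with a 1 -> 2 -> 3 -> 1 cycle.
B = np.zeros((p * k, p * k))
for (i, j) in [(0, 1), (1, 2), (2, 0)]:
    B[j * k:(j + 1) * k, i * k:(i + 1) * k] = 0.4 * np.eye(k)
assert np.max(np.abs(np.linalg.eigvals(B))) < 1   # the cycle is stable

E = rng.laplace(size=(n, p * k))                   # non-Gaussian noise
S = E @ np.linalg.inv(np.eye(p * k) - B).T         # equilibrium scores x = (I-B)^{-1} e
curves = [S[:, i * k:(i + 1) * k] @ phi for i in range(p)]  # p functional variables
print([c.shape for c in curves])                   # each is (n, T)
```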
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Generative Adversarial Neural Operators [59.21759531471597]
We propose the generative adversarial neural operator (GANO), a generative model paradigm for learning probabilities on infinite-dimensional function spaces.
GANO consists of two main components, a generator neural operator and a discriminator neural functional.
We empirically study GANOs in controlled cases where both input and output functions are samples from GRFs and compare their performance to the finite-dimensional counterpart GAN.
arXiv Detail & Related papers (2022-05-06T05:12:22Z)
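The controlled GANO experiments mentioned above draw both input and output functions from Gaussian random fields. Purely to illustrate that setup (not the neural-operator architecture itself), the sketch below samples such random functions on a 1-D grid using a squared-exponential covariance; the grid size and lengthscale are arbitrary choices.

```python
# Sampling random functions from a Gaussian random field (GRF) on a 1-D grid,
# as an illustration of the kind of function-space training data described above.
import numpy as np

rng = np.random.default_rng(2)
T = 128
x = np.linspace(0, 1, T)
lengthscale = 0.1
cov = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale ** 2)
L = np.linalg.cholesky(cov + 1e-8 * np.eye(T))     # jitter for numerical stability
samples = rng.normal(size=(16, T)) @ L.T           # 16 GRF sample paths
print(samples.shape)
```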
- Functional Mixtures-of-Experts [0.24578723416255746]
We consider the statistical analysis of heterogeneous data for prediction in situations where the observations include functions.
We first present a new family of ME models, named functional ME (FME) in which the predictors are potentially noisy observations.
We develop dedicated expectation-maximization algorithms for Lasso-like (EM-Lasso) regularized maximum-likelihood parameter estimation strategies to fit the models.
arXiv Detail & Related papers (2022-02-04T17:32:28Z)
- BCDAG: An R package for Bayesian structure and Causal learning of Gaussian DAGs [77.34726150561087]
We introduce the R package BCDAG for causal discovery and causal effect estimation from observational data.
Our implementation scales efficiently with the number of observations and, whenever the DAGs are sufficiently sparse, the number of variables in the dataset.
We then illustrate the main functions and algorithms on both real and simulated datasets.
arXiv Detail & Related papers (2022-01-28T09:30:32Z)
- Partial Counterfactual Identification from Observational and Experimental Data [83.798237968683]
We develop effective Monte Carlo algorithms to approximate the optimal bounds from an arbitrary combination of observational and experimental data.
Our algorithms are validated extensively on synthetic and real-world datasets.
arXiv Detail & Related papers (2021-10-12T02:21:30Z)
- Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize the goal of recovering independent latent variables and provide estimation procedures for practical applications.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z)
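The recovery-up-to-permutation guarantee in the entry above is the classical ICA indeterminacy. As a generic illustration (scikit-learn's FastICA, not the paper's sparsity-regularised mechanism estimator), the snippet below mixes two non-Gaussian sources and recovers them up to permutation, sign, and scale; the cross-correlation matrix makes the matching visible.

```python
# Generic ICA illustration (not the paper's method): mixed non-Gaussian
# sources are recovered only up to permutation, sign, and scaling.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n = 2000
S = np.c_[rng.laplace(size=n),            # heavy-tailed source
          rng.uniform(-1, 1, size=n)]     # uniform source
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])                # mixing matrix
X = S @ A.T                               # observed mixtures

S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)

# Each recovered component correlates strongly with exactly one true source,
# possibly with swapped order, flipped sign, and different scale.
cross_corr = np.corrcoef(S.T, S_hat.T)[:2, 2:]
print(np.round(cross_corr, 2))
```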
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Learning Inconsistent Preferences with Gaussian Processes [14.64963271587818]
We revisit the widely used preferential Gaussian processes of Chu et al. (2005) and challenge their modelling assumption that imposes rankability of data items via latent utility function values.
We propose a generalisation of pgp which can capture more expressive latent preferential structures in the data.
Our experimental findings support the conjecture that violations of rankability are ubiquitous in real-world preferential data.
arXiv Detail & Related papers (2020-06-06T11:57:45Z)
- Generalisation error in learning with random features and the hidden manifold model [23.71637173968353]
We study generalised linear regression and classification for a synthetically generated dataset.
We consider the high-dimensional regime and use the replica method from statistical physics.
We show how to obtain the so-called double descent behaviour for logistic regression with a peak at the threshold.
We discuss the role played by correlations in the data generated by the hidden manifold model.
arXiv Detail & Related papers (2020-02-21T14:49:41Z)
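The double-descent curve referred to above can be reproduced in miniature with ridgeless (minimum-norm) least squares on random ReLU features. This is a generic illustration under simple assumptions (Gaussian inputs, a noisy linear teacher, squared loss rather than the paper's logistic setting), not the replica computation itself; under these assumptions the test error typically peaks where the number of features matches the number of training samples.

```python
# Minimal double-descent illustration (generic, not the paper's replica
# calculation): min-norm least squares on random ReLU features typically
# shows a test-error peak near n_features == n_train.
import numpy as np

rng = np.random.default_rng(4)
d, n_train, n_test = 30, 100, 2000
w_star = rng.normal(size=d) / np.sqrt(d)                 # linear "teacher"

X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ w_star + 0.1 * rng.normal(size=n_train)
y_test = X_test @ w_star

for n_feat in [20, 50, 90, 100, 110, 200, 500, 1000]:
    W = rng.normal(size=(d, n_feat)) / np.sqrt(d)        # random projection
    F_train = np.maximum(X_train @ W, 0)                 # ReLU random features
    F_test = np.maximum(X_test @ W, 0)
    coef = np.linalg.pinv(F_train) @ y_train             # min-norm least squares
    err = np.mean((F_test @ coef - y_test) ** 2)
    print(f"features={n_feat:5d}  test MSE={err:.3f}")
```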
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.