CUTS+: High-dimensional Causal Discovery from Irregular Time-series
- URL: http://arxiv.org/abs/2305.05890v2
- Date: Wed, 16 Aug 2023 04:15:14 GMT
- Title: CUTS+: High-dimensional Causal Discovery from Irregular Time-series
- Authors: Yuxiao Cheng, Lianglong Li, Tingxiong Xiao, Zongren Li, Qin Zhong,
Jinli Suo, Kunlun He
- Abstract summary: We propose CUTS+, which is built on the Granger-causality-based causal discovery method CUTS.
We show that CUTS+ substantially improves causal discovery performance on high-dimensional data with different types of irregular sampling.
- Score: 13.84185941100574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal discovery in time-series is a fundamental problem in the machine
learning community, enabling causal reasoning and decision-making in complex
scenarios. Recently, researchers have successfully discovered causality by
combining neural networks with Granger causality, but their performance
degrades substantially on high-dimensional data because of highly redundant
network designs and huge causal graphs. Moreover, missing entries in the
observations further hamper causal structure learning. To overcome these
limitations, we propose CUTS+, which is built on the Granger-causality-based
causal discovery method CUTS and improves scalability by introducing a
technique called Coarse-to-fine-discovery (C2FD) and leveraging a
message-passing-based graph neural network (MPGNN). Compared to previous
methods on simulated, quasi-real, and real datasets, we show that CUTS+
substantially improves causal discovery performance on high-dimensional data
under different types of irregular sampling.
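The Granger-causality principle that CUTS and CUTS+ build on can be illustrated with a minimal linear sketch (the papers use neural predictors and learned graphs; this toy example and all names in it are illustrative only): a series x "Granger-causes" y if x's past improves the prediction of y beyond y's own past.

```python
import numpy as np

# Toy linear Granger-causality check (hypothetical example, not the
# CUTS+ method itself): simulate y driven by its own past and by x's past,
# then compare prediction error with and without x as a predictor.
rng = np.random.default_rng(0)
T = 2000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def ar_residual_var(target, predictors):
    """Least-squares fit of target[t] on predictors[t-1]; residual variance."""
    X = np.column_stack([p[:-1] for p in predictors])
    Y = target[1:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return np.var(Y - X @ coef)

restricted = ar_residual_var(y, [y])      # y's past only
full = ar_residual_var(y, [y, x])         # y's past plus x's past
granger_score = restricted / full         # >> 1 suggests x -> y
print(granger_score)
```

In a high-dimensional setting this pairwise test must be run over a huge candidate graph, which is exactly the scalability bottleneck C2FD and the MPGNN are designed to address.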
Related papers
- Causal Temporal Graph Convolutional Neural Networks (CTGCN) [0.44040106718326594]
We propose a Causal Temporal Graph Convolutional Neural Network (CTGCN)
Our architecture is based on a causal discovery mechanism, and is capable of discovering the underlying causal processes.
We show that the integration of causality into the TGCN architecture improves prediction performance by up to 40% over the typical TGCN approach.
arXiv Detail & Related papers (2023-03-16T20:28:36Z) - CUTS: Neural Causal Discovery from Irregular Time-Series Data [27.06531262632836]
Causal discovery from time-series data has been a central task in machine learning.
We present CUTS, a neural Granger causal discovery algorithm to jointly impute unobserved data points and build causal graphs.
Our approach constitutes a promising step towards applying causal discovery to real applications with non-ideal observations.
arXiv Detail & Related papers (2023-02-15T04:16:34Z) - Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z) - Cause-Effect Preservation and Classification using Neurochaos Learning [0.0]
A recently proposed brain-inspired learning algorithm, namely Neurochaos Learning (NL), is used for the classification of cause and effect from simulated data.
The data instances used are generated from coupled AR processes, coupled 1D chaotic skew tent maps, coupled 1D chaotic logistic maps and a real-world prey-predator system.
arXiv Detail & Related papers (2022-01-28T15:26:35Z) - Causal Discovery from Sparse Time-Series Data Using Echo State Network [0.0]
Causal discovery between collections of time-series data can help diagnose causes of symptoms and hopefully prevent faults before they occur.
We propose a new system comprised of two parts, the first part fills missing data with a Gaussian Process Regression, and the second part leverages an Echo State Network.
We report on their corresponding Matthews Correlation Coefficient (MCC) and Receiver Operating Characteristic (ROC) curves and show that the proposed system outperforms existing algorithms.
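The two-stage pipeline described above can be sketched as follows. This is a hypothetical, simplified example: linear interpolation stands in for Gaussian Process Regression, and the Echo State Network is a minimal random-reservoir implementation.

```python
import numpy as np

# Stage 1: impute missing entries (interpolation as a stand-in for GPR).
# Stage 2: fit an Echo State Network with a fixed random reservoir and a
# ridge-regression readout for one-step-ahead prediction.
rng = np.random.default_rng(1)
t = np.arange(300)
series = np.sin(0.1 * t)
series[rng.choice(300, 30, replace=False)] = np.nan  # irregular sampling

mask = np.isnan(series)
series[mask] = np.interp(t[mask], t[~mask], series[~mask])  # Stage 1

n_res = 100
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

states = np.zeros((len(series), n_res))
h = np.zeros(n_res)
for i, u in enumerate(series[:-1]):
    h = np.tanh(W_in[:, 0] * u + W @ h)  # reservoir update
    states[i + 1] = h

# Ridge readout: states[k] (which encodes series[0..k-1]) predicts series[k].
X, Y = states[1:], series[1:]
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ Y)
mse = np.mean((X @ w - Y) ** 2)
print(mse)
```

The reservoir weights are never trained; only the linear readout is fit, which is what makes ESNs attractive for sparse time-series data.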
arXiv Detail & Related papers (2022-01-09T05:55:47Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - The Causal Neural Connection: Expressiveness, Learnability, and
Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
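A structural causal model, as described above, can be made concrete with a toy example (illustrative only; the paper parameterizes the mechanisms with neural networks): each variable is assigned a mechanism plus exogenous noise, and an intervention do(X = x0) replaces X's mechanism with a constant.

```python
import numpy as np

# Minimal SCM sketch: X := U_X, Y := 2X + U_Y, with a do()-style
# intervention on X. Toy linear mechanisms, not the paper's NCM.
rng = np.random.default_rng(2)

def sample(n, do_x=None):
    u_x, u_y = rng.normal(size=n), rng.normal(size=n)
    x = u_x if do_x is None else np.full(n, do_x)  # intervention replaces X's mechanism
    y = 2.0 * x + u_y
    return x, y

x_obs, y_obs = sample(50_000)          # observational distribution
_, y_do = sample(50_000, do_x=1.0)     # interventional distribution
print(np.mean(y_obs), np.mean(y_do))   # E[Y] near 0, E[Y | do(X=1)] near 2
```

The gap between the observational and interventional means is what separates the rungs of the causal hierarchy that the theorem in the paper concerns.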
arXiv Detail & Related papers (2021-07-02T01:55:18Z) - Consistency of mechanistic causal discovery in continuous-time using
Neural ODEs [85.7910042199734]
We consider causal discovery in continuous-time for the study of dynamical systems.
We propose a causal discovery algorithm based on penalized Neural ODEs.
arXiv Detail & Related papers (2021-05-06T08:48:02Z) - CASTLE: Regularization via Auxiliary Causal Graph Discovery [89.74800176981842]
We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.
CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features.
arXiv Detail & Related papers (2020-09-28T09:49:38Z) - Causal Discovery from Incomplete Data: A Deep Learning Approach [21.289342482087267]
Imputed Causal Learning (ICL) is proposed to perform iterative missing-data imputation and causal structure discovery.
We show that ICL can outperform state-of-the-art methods under different missing data mechanisms.
arXiv Detail & Related papers (2020-01-15T14:28:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.