CUTS+: High-dimensional Causal Discovery from Irregular Time-series
- URL: http://arxiv.org/abs/2305.05890v2
- Date: Wed, 16 Aug 2023 04:15:14 GMT
- Title: CUTS+: High-dimensional Causal Discovery from Irregular Time-series
- Authors: Yuxiao Cheng, Lianglong Li, Tingxiong Xiao, Zongren Li, Qin Zhong,
Jinli Suo, Kunlun He
- Abstract summary: We propose CUTS+, which is built on the Granger-causality-based causal discovery method CUTS.
We show that CUTS+ substantially improves causal discovery performance on high-dimensional data with different types of irregular sampling.
- Score: 13.84185941100574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal discovery in time-series is a fundamental problem in the machine
learning community, enabling causal reasoning and decision-making in complex
scenarios. Recently, researchers have successfully discovered causality by
combining neural networks with Granger causality, but their performance
degrades sharply on high-dimensional data because of highly redundant network
designs and huge causal graphs. Moreover, missing entries in the observations
further hamper causal structure learning. To overcome these limitations, we
propose CUTS+, which builds on the Granger-causality-based causal discovery
method CUTS and improves scalability by introducing a technique called
coarse-to-fine discovery (C2FD) and leveraging a message-passing-based graph
neural network (MPGNN). On simulated, quasi-real, and real datasets, we show
that CUTS+ substantially improves causal discovery performance on
high-dimensional data under different types of irregular sampling.
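The Granger-causal backbone the abstract refers to can be illustrated with a deliberately simplified linear stand-in: score each ordered pair (i, j) by how much the prediction error for series j grows when series i's lagged history is withheld from the regression. This is a hedged sketch, not the neural CUTS+ method; the function names and the ridge-regression predictor are illustrative assumptions.

```python
import numpy as np

def granger_matrix(X, lag=2, ridge=1e-3):
    """Pairwise linear Granger-causality scores for a (T, n) time series.

    scores[i, j] > 0 suggests series i helps predict series j: the
    ridge-regression error rises when i's lags are removed. A linear
    stand-in for the neural predictors used by CUTS-style methods.
    """
    T, n = X.shape
    # Lagged design matrix: row t uses X[t-1], ..., X[t-lag] for all series.
    full = np.hstack([X[lag - k - 1 : T - k - 1] for k in range(lag)])
    scores = np.zeros((n, n))
    for j in range(n):
        y = X[lag:, j]
        err_full = _ridge_err(full, y, ridge)
        for i in range(n):
            # Drop every lag column belonging to series i (column c holds
            # series c % n, since each lag block is ordered by series).
            keep = [c for c in range(full.shape[1]) if c % n != i]
            err_reduced = _ridge_err(full[:, keep], y, ridge)
            scores[i, j] = err_reduced - err_full
    return scores

def _ridge_err(A, y, lam):
    """Mean squared training error of a ridge regression of y on A."""
    w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    return float(np.mean((A @ w - y) ** 2))
```

On a toy system where series 0 drives series 1 with one step of delay, the (0, 1) score dominates the (1, 0) score, recovering the directed edge; the neural variants replace the ridge predictor with learned networks and, in CUTS+, share parameters via C2FD and an MPGNN to stay tractable in high dimensions.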
Related papers
- On the Convergence of (Stochastic) Gradient Descent for Kolmogorov--Arnold Networks [56.78271181959529]
Kolmogorov--Arnold Networks (KANs) have gained significant attention in the deep learning community.
Empirical investigations demonstrate that KANs optimized via stochastic gradient descent (SGD) are capable of achieving near-zero training loss.
arXiv Detail & Related papers (2024-10-10T15:34:10Z) - Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z) - Causal Temporal Graph Convolutional Neural Networks (CTGCN) [0.44040106718326594]
We propose a Causal Temporal Graph Convolutional Neural Network (CTGCN).
Our architecture is based on a causal discovery mechanism and is capable of discovering the underlying causal processes.
We show that integrating causality into the TGCN architecture improves prediction performance by up to 40% over the typical TGCN approach.
arXiv Detail & Related papers (2023-03-16T20:28:36Z) - CUTS: Neural Causal Discovery from Irregular Time-Series Data [27.06531262632836]
Causal discovery from time-series data has been a central task in machine learning.
We present CUTS, a neural Granger causal discovery algorithm to jointly impute unobserved data points and build causal graphs.
Our approach constitutes a promising step towards applying causal discovery to real applications with non-ideal observations.
arXiv Detail & Related papers (2023-02-15T04:16:34Z) - Cause-Effect Preservation and Classification using Neurochaos Learning [0.0]
A recently proposed brain-inspired learning algorithm, namely Neurochaos Learning (NL), is used for the classification of cause-effect from simulated data.
The data instances used are generated from coupled AR processes, coupled 1D chaotic skew tent maps, coupled 1D chaotic logistic maps and a real-world prey-predator system.
arXiv Detail & Related papers (2022-01-28T15:26:35Z) - Causal Discovery from Sparse Time-Series Data Using Echo State Network [0.0]
Causal discovery between collections of time-series data can help diagnose causes of symptoms and hopefully prevent faults before they occur.
We propose a new system with two parts: the first fills in missing data with Gaussian Process Regression, and the second leverages an Echo State Network.
We report the corresponding Matthews Correlation Coefficient (MCC) and Receiver Operating Characteristic (ROC) curves and show that the proposed system outperforms existing algorithms.
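The two-stage design summarized above (impute, then model) can be sketched as follows. This is a minimal illustration under assumed defaults, not the paper's implementation: the RBF kernel parameters, reservoir size, and function names are all hypothetical.

```python
import numpy as np

def gp_impute(t_obs, y_obs, t_query, length=5.0, noise=1e-2):
    """Stage 1: Gaussian Process regression with an RBF kernel fills
    missing samples of an irregularly sampled series."""
    def rbf(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = rbf(t_obs, t_obs) + noise * np.eye(len(t_obs))  # jitter for stability
    return rbf(t_query, t_obs) @ np.linalg.solve(K, y_obs)

def esn_states(u, n_res=50, rho=0.9, seed=0):
    """Stage 2: drive a fixed random reservoir (echo state network) with
    the imputed series; the states would feed a trained linear readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
    w_in = rng.normal(size=n_res)
    r = np.zeros(n_res)
    states = []
    for x in u:
        r = np.tanh(W @ r + w_in * x)
        states.append(r.copy())
    return np.array(states)
```

A spectral radius below 1 keeps the reservoir's memory of past inputs fading, which is what makes the fixed random recurrence usable with only a cheap linear readout on top.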
arXiv Detail & Related papers (2022-01-09T05:55:47Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - The Causal Neural Connection: Expressiveness, Learnability, and
Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z) - Consistency of mechanistic causal discovery in continuous-time using
Neural ODEs [85.7910042199734]
We consider causal discovery in continuous-time for the study of dynamical systems.
We propose a causal discovery algorithm based on penalized Neural ODEs.
arXiv Detail & Related papers (2021-05-06T08:48:02Z) - Causal Discovery from Incomplete Data: A Deep Learning Approach [21.289342482087267]
Imputated Causal Learning (ICL) is proposed to perform iterative missing-data imputation and causal structure discovery.
We show that ICL can outperform state-of-the-art methods under different missing data mechanisms.
arXiv Detail & Related papers (2020-01-15T14:28:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.