Kolmogorov-Arnold Networks for Time Series Granger Causality Inference
- URL: http://arxiv.org/abs/2501.08958v2
- Date: Wed, 05 Feb 2025 15:26:49 GMT
- Title: Kolmogorov-Arnold Networks for Time Series Granger Causality Inference
- Authors: Meiliang Liu, Yunfang Xu, Zijin Li, Zhengye Si, Xiaoxiao Yang, Xinyue Yang, Zhiwen Zhao
- Abstract summary: We propose a novel architecture that extends the recently proposed Kolmogorov-Arnold Networks (KAN) to the domain of causal inference.
By extracting base weights from KAN layers, KANGCI effectively infers Granger causality from time series.
We also propose an algorithm that automatically selects causal relationships with better inference performance from the original or time-reversed time series.
- Abstract: We propose the Granger causality inference Kolmogorov-Arnold Networks (KANGCI), a novel architecture that extends the recently proposed Kolmogorov-Arnold Networks (KAN) to the domain of causal inference. By extracting base weights from KAN layers and incorporating a sparsity-inducing penalty and ridge regularization, KANGCI effectively infers Granger causality from time series. Additionally, we propose an algorithm based on time-reversed Granger causality that automatically selects the causal relationships with better inference performance from the original or time-reversed time series, or integrates the two results to mitigate spurious connectivities. Comprehensive experiments conducted on Lorenz-96, gene regulatory networks, fMRI BOLD signals, VAR, and real-world EEG datasets demonstrate that the proposed model achieves performance competitive with state-of-the-art methods in inferring Granger causality from nonlinear, high-dimensional, and limited-sample time series.
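The abstract's core recipe (read causal strengths off first-layer weights fitted under a sparsity penalty plus ridge regularization) can be illustrated with a deliberately simplified linear stand-in. Everything below (function names, penalty weights, the toy two-series system) is a hypothetical sketch for intuition, not the authors' KANGCI implementation:

```python
import numpy as np

def granger_strengths(X, lam_ridge=1e-2, lam_sparse=1e-2, lr=0.05, steps=2000):
    """For each target series j, fit a lag-1 linear predictor with ridge
    regularization and an L1 (sparsity) penalty on the input weights,
    optimized by proximal gradient descent.  |W[j, i]| is then read as the
    strength of the candidate Granger edge i -> j; weights driven to zero
    by the L1 step mean "no edge"."""
    past, future = X[:-1], X[1:]
    n, p = past.shape
    W = np.zeros((p, p))
    for j in range(p):
        w, y = np.zeros(p), future[:, j]
        for _ in range(steps):
            # gradient of the least-squares loss plus the ridge term
            grad = past.T @ (past @ w - y) / n + lam_ridge * w
            w = w - lr * grad
            # proximal step for the L1 penalty (soft-thresholding)
            w = np.sign(w) * np.maximum(np.abs(w) - lr * lam_sparse, 0.0)
        W[j] = w
    return np.abs(W)

# Toy system: series 0 drives series 1, never the reverse.
rng = np.random.default_rng(0)
T = 500
X = np.zeros((T, 2))
for t in range(1, T):
    X[t, 0] = 0.5 * X[t - 1, 0] + 0.5 * rng.standard_normal()
    X[t, 1] = 0.8 * X[t - 1, 0] + 0.2 * X[t - 1, 1] + 0.5 * rng.standard_normal()
G = granger_strengths(X)  # G[1, 0] should clearly dominate G[0, 1]
```

The paper replaces this linear predictor with a KAN and adds a time-reversal check on top; the sketch only captures the shared idea that penalized first-layer weights double as a causal adjacency estimate.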
Related papers
- Chain-of-Retrieval Augmented Generation [72.06205327186069]
This paper introduces an approach for training o1-like RAG models that retrieve and reason over relevant information step by step before generating the final answer.
Our proposed method, CoRAG, allows the model to dynamically reformulate the query based on the evolving state.
arXiv Detail & Related papers (2025-01-24T09:12:52Z) - Granger Causality Detection with Kolmogorov-Arnold Networks [20.96356350801151]
This study contributes to the definition of neural Granger causality models.
We develop a framework called Granger Causality KAN (GC-KAN) along with a tailored training approach designed specifically for Granger causality detection.
Our findings show the potential of KANs to outperform multilayer perceptrons (MLPs) in discerning interpretable Granger causal relationships.
arXiv Detail & Related papers (2024-12-19T20:10:34Z) - Jacobian Regularizer-based Neural Granger Causality [45.902407376192656]
We propose a Jacobian Regularizer-based Neural Granger Causality (JRNGC) approach.
Our method eliminates the sparsity constraints of weights by leveraging an input-output Jacobian matrix regularizer.
Our proposed approach achieves competitive performance with the state-of-the-art methods for learning summary Granger causality and full-time Granger causality.
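The Jacobian-regularizer idea above, stripped to its core, scores candidate causes by how sensitive each output of a fitted predictor is to each input. The sketch below estimates those sensitivities by central finite differences on a hand-picked nonlinear map; the function names and the example map are illustrative assumptions, not the paper's code:

```python
import numpy as np

def jacobian_scores(f, X, eps=1e-5):
    """score[j, i] = mean over samples of |d f(x)[j] / d x[i]|, estimated
    with central finite differences.  A near-zero entry suggests input i
    does not drive output j (no Granger edge i -> j)."""
    n, p = X.shape
    q = len(f(X[0]))
    S = np.zeros((q, p))
    for x in X:
        for i in range(p):
            d = np.zeros(p)
            d[i] = eps
            S[:, i] += np.abs(f(x + d) - f(x - d)) / (2.0 * eps)
    return S / n

# Illustrative predictor: output 0 depends only on input 0,
# output 1 depends on both inputs.
f = lambda x: np.array([np.sin(x[0]), x[0] * x[1]])
X = np.random.default_rng(1).standard_normal((200, 2))
S = jacobian_scores(f, X)  # S[0, 1] vanishes; the other entries do not
```

JRNGC penalizes this Jacobian during training rather than merely reading it out afterwards, which is what lets it drop the explicit sparsity constraints on weights.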
arXiv Detail & Related papers (2024-05-14T17:13:50Z) - Causal Temporal Regime Structure Learning [49.77103348208835]
We present CASTOR, a novel method that concurrently learns the Directed Acyclic Graph (DAG) for each regime.
We establish the identifiability of the regimes and DAGs within our framework.
Experiments show that CASTOR consistently outperforms existing causal discovery models.
arXiv Detail & Related papers (2023-11-02T17:26:49Z) - Less is More: Mitigate Spurious Correlations for Open-Domain Dialogue Response Generation Models by Causal Discovery [52.95935278819512]
We conduct the first study on spurious correlations for open-domain response generation models, based on CGDIALOG, a corpus curated in our work.
Inspired by causal discovery algorithms, we propose a novel model-agnostic method for training and inference of response generation model.
arXiv Detail & Related papers (2023-03-02T06:33:48Z) - Gated Recurrent Neural Networks with Weighted Time-Delay Feedback [59.125047512495456]
We introduce a novel gated recurrent unit (GRU) with a weighted time-delay feedback mechanism.
We show that $\tau$-GRU can converge faster and generalize better than state-of-the-art recurrent units and gated recurrent architectures.
arXiv Detail & Related papers (2022-12-01T02:26:34Z) - Deep Recurrent Modelling of Granger Causality with Latent Confounding [0.0]
We propose a deep learning-based approach to model non-linear Granger causality by directly accounting for latent confounders.
We demonstrate the model performance on non-linear time series for which the latent confounder influences the cause and effect with different time lags.
arXiv Detail & Related papers (2022-02-23T03:26:22Z) - Joint estimation of multiple Granger causal networks: Inference of group-level brain connectivity [8.122270502556374]
This paper considers joint learning of multiple Granger graphical models to discover underlying differential Granger causality structures across multiple time series.
Our methods were also applied to available resting-state fMRI time series from the ADHD-200 data sets to learn the differences of causality mechanisms.
arXiv Detail & Related papers (2021-05-15T10:29:02Z) - Interpretable Models for Granger Causality Using Self-explaining Neural Networks [4.56877715768796]
We propose a novel framework for inferring Granger causality under nonlinear dynamics based on an extension of self-explaining neural networks.
This framework is more interpretable than other neural-network-based techniques for inferring Granger causality.
arXiv Detail & Related papers (2021-01-19T12:59:00Z) - CASTLE: Regularization via Auxiliary Causal Graph Discovery [89.74800176981842]
We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.
CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features.
arXiv Detail & Related papers (2020-09-28T09:49:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.