Jacobian Regularizer-based Neural Granger Causality
- URL: http://arxiv.org/abs/2405.08779v1
- Date: Tue, 14 May 2024 17:13:50 GMT
- Title: Jacobian Regularizer-based Neural Granger Causality
- Authors: Wanqi Zhou, Shuanghao Bai, Shujian Yu, Qibin Zhao, Badong Chen
- Abstract summary: We propose a Jacobian Regularizer-based Neural Granger Causality (JRNGC) approach.
Our method eliminates the sparsity constraints on the weights by leveraging an input-output Jacobian matrix regularizer.
Our proposed approach achieves performance competitive with state-of-the-art methods for learning summary Granger causality and full-time Granger causality.
- Score: 45.902407376192656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advancement of neural networks, diverse methods for neural Granger causality have emerged that handle complex data and nonlinear relationships well. However, the existing framework of neural Granger causality has several limitations. It requires a separate predictive model for each target variable, and the estimated relationships depend on sparsity imposed on the first-layer weights, which makes it difficult to model complex interactions between variables and degrades the estimation accuracy of Granger causality. Moreover, most existing methods cannot capture full-time Granger causality. To address these drawbacks, we propose a Jacobian Regularizer-based Neural Granger Causality (JRNGC) approach, a straightforward yet highly effective method for learning multivariate summary Granger causality and full-time Granger causality by constructing a single model for all target variables. Specifically, our method removes the sparsity constraints on the weights by leveraging an input-output Jacobian matrix regularizer, which can subsequently be represented as a weighted causal matrix in post-hoc analysis. Extensive experiments show that our proposed approach achieves performance competitive with state-of-the-art methods for learning summary Granger causality and full-time Granger causality while maintaining lower model complexity and high scalability.
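The abstract describes training one shared model for all target variables and reading Granger causality off the input-output Jacobian rather than off first-layer weight sparsity. Below is a minimal PyTorch sketch of that idea; the class and function names, network shape, lag-major input layout, and the regularization weight `lam` are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SharedForecaster(nn.Module):
    """One model predicting all n_series next values from a flattened
    (n_lags, n_series) input window, instead of one model per target."""
    def __init__(self, n_series, n_lags, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_series * n_lags, hidden),
            nn.Tanh(),
            nn.Linear(hidden, n_series),
        )

    def forward(self, x):                  # x: (batch, n_lags * n_series)
        return self.net(x)

def jacobian_l1(model, x):
    """L1 penalty on the input-output Jacobian (one backward per output)."""
    x = x.detach().requires_grad_(True)
    y = model(x)
    penalty = 0.0
    for j in range(y.shape[1]):
        grad, = torch.autograd.grad(y[:, j].sum(), x, create_graph=True)
        penalty = penalty + grad.abs().mean()
    return penalty

def causal_matrix(model, x, n_series, n_lags):
    """Post-hoc: mean |Jacobian| reshaped to (target, lag, source); the max
    over lags gives the summary matrix, keeping the lag axis gives the
    full-time Granger causality. Assumes lag-major input flattening."""
    x = x.detach().requires_grad_(True)
    y = model(x)
    rows = []
    for j in range(n_series):
        grad, = torch.autograd.grad(y[:, j].sum(), x, retain_graph=True)
        rows.append(grad.abs().mean(0).view(n_lags, n_series))
    J = torch.stack(rows)                  # (n_series, n_lags, n_series)
    return J.amax(dim=1)                   # summary causal matrix

# Toy training step: forecasting loss plus the Jacobian sparsity penalty.
model = SharedForecaster(n_series=5, n_lags=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, target = torch.randn(32, 15), torch.randn(32, 5)
lam = 0.1                                  # regularization weight (assumed)
loss = nn.functional.mse_loss(model(x), target) + lam * jacobian_l1(model, x)
opt.zero_grad(); loss.backward(); opt.step()
```

A single penalized model of this form replaces the per-target networks with first-layer sparsity that the abstract critiques; causality is read from the Jacobian after training rather than from the weights themselves.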
Related papers
- Learning Flexible Time-windowed Granger Causality Integrating Heterogeneous Interventional Time Series Data [21.697069894721448]
We present a theoretically-grounded method that infers Granger causal structure and identifies unknown targets by leveraging heterogeneous interventional time series data.
Our method outperforms several robust baseline methods in learning Granger causal structure from interventional time series data.
arXiv Detail & Related papers (2024-06-14T21:36:00Z) - Learning Granger Causality from Instance-wise Self-attentive Hawkes
Processes [24.956802640469554]
Instance-wise Self-Attentive Hawkes Processes (ISAHP) is a novel deep learning framework that can directly infer the Granger causality at the instance level.
ISAHP is capable of discovering complex instance-level causal structures that cannot be handled by classical models.
arXiv Detail & Related papers (2024-02-06T05:46:51Z) - The Risk of Federated Learning to Skew Fine-Tuning Features and
Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning risks skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z) - Nonlinear Permuted Granger Causality [0.6526824510982799]
Granger causal inference is a contentious but widespread method used in fields ranging from economics to neuroscience.
To allow for out-of-sample comparison, a measure of functional connectivity is explicitly defined using permutations of the covariate set (a minimal sketch of this permutation idea appears after this list).
Performance of the permutation method is compared to penalized variable selection, naive replacement, and omission techniques via simulation.
arXiv Detail & Related papers (2023-08-11T16:44:16Z) - ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z) - Deep Recurrent Modelling of Granger Causality with Latent Confounding [0.0]
We propose a deep learning-based approach to model non-linear Granger causality by directly accounting for latent confounders.
We demonstrate the model performance on non-linear time series for which the latent confounder influences the cause and effect with different time lags.
arXiv Detail & Related papers (2022-02-23T03:26:22Z) - Estimation of Bivariate Structural Causal Models by Variational Gaussian
Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z) - Inductive Granger Causal Modeling for Multivariate Time Series [49.29373497269468]
We propose an Inductive GRanger cAusal modeling (InGRA) framework for inductive Granger causality learning and common causal structure detection.
In particular, we train one global model for individuals with different Granger causal structures through a novel attention mechanism, called Granger causal attention.
The model can detect common causal structures for different individuals and infer Granger causal structures for newly arrived individuals.
arXiv Detail & Related papers (2021-02-10T07:48:00Z) - Interpretable Models for Granger Causality Using Self-explaining Neural
Networks [4.56877715768796]
We propose a novel framework for inferring Granger causality under nonlinear dynamics based on an extension of self-explaining neural networks.
This framework is more interpretable than other neural-network-based techniques for inferring Granger causality.
arXiv Detail & Related papers (2021-01-19T12:59:00Z) - CASTLE: Regularization via Auxiliary Causal Graph Discovery [89.74800176981842]
We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.
CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features.
arXiv Detail & Related papers (2020-09-28T09:49:38Z)