Granger Causality using Neural Networks
- URL: http://arxiv.org/abs/2208.03703v1
- Date: Sun, 7 Aug 2022 12:02:48 GMT
- Title: Granger Causality using Neural Networks
- Authors: Samuel Horvath, Malik Shahid Sultan and Hernando Ombao
- Abstract summary: We present several new classes of models that can handle underlying non-linearity.
We show one can directly decouple lags and individual time series importance via decoupled penalties.
- Score: 8.835231777363399
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Granger Causality (GC) test is a famous statistical hypothesis test for
investigating whether the past of one time series affects the future of another. It helps
answer the question of whether one time series is helpful in forecasting another. Standard
traditional approaches to Granger causality detection commonly assume linear dynamics, but
this simplification does not hold in many real-world applications, e.g., neuroscience or
genomics, which are inherently non-linear. In such cases, imposing linear models such as
Vector Autoregressive (VAR) models can lead to inconsistent estimation of the true Granger
causal interactions. Machine Learning (ML) can learn the hidden patterns in datasets; in
particular, Deep Learning (DL) has shown tremendous promise in learning the non-linear
dynamics of complex systems. Recent work by Tank et al. proposes to overcome the issue of
linear simplification in VAR models by using neural networks combined with
sparsity-inducing penalties on the learnable weights. In this work, we build upon the ideas
introduced by Tank et al. We propose several new classes of models that can handle
underlying non-linearity. Firstly, we present the Learned Kernel VAR (LeKVAR) model, an
extension of VAR models that also learns a kernel parametrized by a neural net. Secondly,
we show that one can directly decouple lags and individual time series importance via
decoupled penalties. This decoupling provides better scaling and allows us to embed lag
selection into RNNs. Lastly, we propose a new training algorithm that supports
mini-batching and is compatible with commonly used adaptive optimizers such as Adam. The
proposed techniques are evaluated on several simulated datasets inspired by real-world
applications. We also apply these methods to Electro-Encephalogram (EEG) data from an
epilepsy patient to study the evolution of GC before, during, and after seizure across the
19 EEG channels.
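As a concrete illustration of the LeKVAR idea described in the abstract, the following is a minimal sketch (not the authors' code): a shared elementwise kernel parametrized by a small MLP is applied to the lagged inputs, followed by a linear VAR layer whose coefficients are grouped per (target, source) pair for a sparsity penalty. The class name, architecture sizes, and the choice of a single shared scalar kernel are illustrative assumptions.

```python
# Minimal LeKVAR-style sketch: a learned elementwise kernel (small MLP) applied to
# lagged inputs, followed by a linear VAR layer. Architecture and penalty details
# are assumptions for illustration, not the authors' implementation.
import torch
import torch.nn as nn

class LeKVARSketch(nn.Module):
    def __init__(self, n_series: int, n_lags: int, hidden: int = 16):
        super().__init__()
        # Shared scalar kernel phi: R -> R, parametrized by a small MLP.
        self.kernel = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        # Linear VAR coefficients over the kernel-transformed lags:
        # one weight per (target series, source series, lag).
        self.var = nn.Linear(n_series * n_lags, n_series, bias=True)
        self.n_series, self.n_lags = n_series, n_lags

    def forward(self, x_lagged: torch.Tensor) -> torch.Tensor:
        # x_lagged: (batch, n_series, n_lags); apply the kernel elementwise,
        # then predict the next value of every series with a linear map.
        z = self.kernel(x_lagged.unsqueeze(-1)).squeeze(-1)
        return self.var(z.reshape(z.shape[0], -1))

def gc_penalty(model: LeKVARSketch) -> torch.Tensor:
    # Group-lasso penalty over all lags of each (target, source) pair;
    # a zero group is interpreted as "source does not Granger-cause target".
    W = model.var.weight.view(model.n_series, model.n_series, model.n_lags)
    return W.norm(dim=-1).sum()
```

After training on a forecasting loss plus this group penalty, an estimated GC graph can be read off by thresholding the (target, source) group norms.

Similarly, the decoupled-penalty idea can be sketched as separate learnable importance scores for each input series and for each lag, each carrying its own sparsity penalty; because the penalties enter the loss directly rather than through a proximal step, mini-batch training with adaptive optimizers such as Adam works out of the box. The predictor, penalty form, and single-target setup below are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of decoupled penalties: per-series and per-lag importance scores,
# each with its own L1 penalty, scaling the inputs of an otherwise unconstrained
# predictor for one target series. Illustrative only.
import torch
import torch.nn as nn

class DecoupledGC(nn.Module):
    def __init__(self, n_series: int, n_lags: int, hidden: int = 32):
        super().__init__()
        self.series_score = nn.Parameter(torch.ones(n_series))  # per-series importance
        self.lag_score = nn.Parameter(torch.ones(n_lags))       # per-lag importance
        self.net = nn.Sequential(
            nn.Linear(n_series * n_lags, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x_lagged: torch.Tensor) -> torch.Tensor:
        # x_lagged: (batch, n_series, n_lags); scale by the decoupled importances.
        scaled = x_lagged * self.series_score[None, :, None] * self.lag_score[None, None, :]
        return self.net(scaled.flatten(1))

def train_step(model, opt, x_lagged, y, lam_series=1e-2, lam_lag=1e-2):
    # One mini-batch step compatible with Adam: the sparsity penalties are added
    # to the loss instead of being handled by a proximal operator.
    opt.zero_grad()
    pred = model(x_lagged).squeeze(-1)
    loss = nn.functional.mse_loss(pred, y)
    loss = loss + (lam_series * model.series_score.abs().sum()
                   + lam_lag * model.lag_score.abs().sum())
    loss.backward()
    opt.step()
    return loss.item()
```

Usage would be, e.g., opt = torch.optim.Adam(model.parameters(), lr=1e-3) followed by repeated train_step calls over mini-batches; series whose learned importance shrinks toward zero are treated as non-causal for the modeled target, and lag scores play the role of lag selection.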
Related papers
- Learning from Linear Algebra: A Graph Neural Network Approach to Preconditioner Design for Conjugate Gradient Solvers [42.69799418639716]
Deep learning models may be used to precondition residuals during iterations of linear solvers such as the conjugate gradient (CG) method.
Neural network models require an enormous number of parameters to approximate well in this setup.
In our work, we recall well-established preconditioners from linear algebra and use them as a starting point for training the GNN.
arXiv Detail & Related papers (2024-05-24T13:44:30Z)
- Scalable Bayesian Inference in the Era of Deep Learning: From Gaussian Processes to Deep Neural Networks [0.5827521884806072]
Large neural networks trained on large datasets have become the dominant paradigm in machine learning.
This thesis develops scalable methods to equip neural networks with model uncertainty.
arXiv Detail & Related papers (2024-04-29T23:38:58Z)
- Diffusion-Model-Assisted Supervised Learning of Generative Models for Density Estimation [10.793646707711442]
We present a framework for training generative models for density estimation.
We use the score-based diffusion model to generate labeled data.
Once the labeled data are generated, we can train a simple fully connected neural network to learn the generative model in a supervised manner.
arXiv Detail & Related papers (2023-10-22T23:56:19Z)
- Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
arXiv Detail & Related papers (2023-10-04T07:14:43Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- What learning algorithm is in-context learning? Investigations with linear models [87.91612418166464]
We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly.
We show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression.
We present preliminary evidence that in-context learners share algorithmic features with these predictors.
arXiv Detail & Related papers (2022-11-28T18:59:51Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Neural Jump Ordinary Differential Equations: Consistent Continuous-Time Prediction and Filtering [6.445605125467574]
We introduce the Neural Jump ODE (NJ-ODE), which provides a data-driven approach to learn, continuously in time, the conditional expectation of a stochastic process.
We show that our model converges to the $L^2$-optimal online prediction.
We experimentally show that our model outperforms the baselines in more complex learning tasks.
arXiv Detail & Related papers (2020-06-08T16:34:51Z)