Large-scale kernelized GRANGER causality to infer topology of directed
graphs with applications to brain networks
- URL: http://arxiv.org/abs/2011.08261v1
- Date: Mon, 16 Nov 2020 20:30:19 GMT
- Title: Large-scale kernelized GRANGER causality to infer topology of directed
graphs with applications to brain networks
- Authors: M. Ali Vosoughi, Axel Wismuller
- Abstract summary: In large networks with short time-series, topology estimation becomes ill-posed.
The present paper proposes a novel nonlinearity-preserving topology inference method for directed networks with co-evolving nodal processes.
Tests on real datasets from a functional magnetic resonance imaging (fMRI) study demonstrate 96.3 percent accuracy in diagnosing schizophrenia patients.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph topology inference of network processes with co-evolving and
interacting time-series is crucial for network studies. Vector autoregressive
models (VAR) are popular approaches for topology inference of directed graphs;
however, in large networks with short time-series, topology estimation becomes
ill-posed. The present paper proposes a novel nonlinearity-preserving topology
inference method for directed networks with co-evolving nodal processes that
solves the ill-posedness problem. The proposed method, large-scale kernelized
Granger causality (lsKGC), uses kernel functions to transform data into a
low-dimensional feature space and solves the autoregressive problem in the
feature space, then finds the pre-images in the input space to infer the
topology. Extensive simulations on synthetic datasets with nonlinear and linear
dependencies and known ground-truth demonstrate significant improvement in the
area under the receiver operating characteristic curve (AUC) for network
recovery compared to existing methods. Furthermore, tests on real datasets from
a functional magnetic resonance imaging (fMRI) study demonstrate 96.3 percent
accuracy in diagnosing schizophrenia patients, the highest reported in the
literature using only brain time-series information.
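As a rough illustration of the pipeline the abstract describes (kernel feature map, autoregression in the feature space, pre-image reconstruction, Granger-style edge scoring), the Python sketch below uses scikit-learn's KernelPCA and ridge regression. The RBF kernel, feature dimension, and ablation-based scoring rule are illustrative assumptions, not the authors' lsKGC implementation.

```python
# Minimal sketch of the lsKGC idea described above: (1) embed the multivariate
# time series in a low-dimensional kernel feature space, (2) fit a linear
# autoregressive model there, (3) reconstruct pre-images in the input space,
# and (4) score directed edges Granger-style by the prediction-error increase
# when a candidate source node is ablated. Kernel, dimension, and scoring rule
# are illustrative assumptions, not the authors' exact algorithm.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import Ridge

def lskgc_sketch(X, n_components=10, gamma=0.1):
    """X: (T, N) array of T time points for N nodes.
    Returns an (N, N) matrix where scores[i, j] estimates the influence j -> i."""
    T, N = X.shape
    # Kernel feature map with an approximate pre-image (inverse transform).
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma,
                     fit_inverse_transform=True)
    Z = kpca.fit_transform(X)  # (T, n_components)

    def one_step_error(mask):
        # AR(1) model in feature space using only the unmasked input nodes.
        Zm = kpca.transform(X * mask)
        model = Ridge(alpha=1.0).fit(Zm[:-1], Z[1:])
        X_hat = kpca.inverse_transform(model.predict(Zm[:-1]))  # pre-images
        return np.mean((X[1:] - X_hat) ** 2, axis=0)  # per-node one-step MSE

    full_err = one_step_error(np.ones(N))
    scores = np.zeros((N, N))
    for j in range(N):  # ablate candidate source j, measure the error increase
        mask = np.ones(N)
        mask[j] = 0.0
        scores[:, j] = one_step_error(mask) - full_err
    np.fill_diagonal(scores, 0.0)
    return scores  # larger scores[i, j] -> stronger evidence for edge j -> i
```

In practice the kernel and feature dimension would be tuned and the score matrix thresholded to recover the directed topology; sweeping that threshold against a known ground-truth graph is what the reported AUC evaluates in the simulations.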
Related papers
- Spatial-Temporal DAG Convolutional Networks for End-to-End Joint
Effective Connectivity Learning and Resting-State fMRI Classification [42.82118108887965]
Building comprehensive brain connectomes has proved to be of fundamental importance in resting-state fMRI (rs-fMRI) analysis.
We model the brain network as a directed acyclic graph (DAG) to discover direct causal connections between brain regions.
We propose Spatial-Temporal DAG Convolutional Network (ST-DAGCN) to jointly infer effective connectivity and classify rs-fMRI time series.
arXiv Detail & Related papers (2023-12-16T04:31:51Z) - A Generative Self-Supervised Framework using Functional Connectivity in
fMRI Data [15.211387244155725]
Deep neural networks trained on Functional Connectivity (FC) networks extracted from functional Magnetic Resonance Imaging (fMRI) data have gained popularity.
Recent research on the application of Graph Neural Network (GNN) to FC suggests that exploiting the time-varying properties of the FC could significantly improve the accuracy and interpretability of the model prediction.
The high cost of acquiring high-quality fMRI data and corresponding labels poses a hurdle to their application in real-world settings.
We propose a generative SSL approach that is tailored to effectively harness temporal information within dynamic FC.
arXiv Detail & Related papers (2023-12-04T16:14:43Z) - Accelerating Scalable Graph Neural Network Inference with Node-Adaptive
Propagation [80.227864832092]
Graph neural networks (GNNs) have exhibited exceptional efficacy in a diverse array of applications.
The sheer size of large-scale graphs presents a significant challenge to real-time inference with GNNs.
We propose an online propagation framework and two novel node-adaptive propagation methods.
arXiv Detail & Related papers (2023-10-17T05:03:00Z) - PINQI: An End-to-End Physics-Informed Approach to Learned Quantitative MRI Reconstruction [0.7199733380797579]
Quantitative Magnetic Resonance Imaging (qMRI) enables the reproducible measurement of biophysical parameters in tissue.
The challenge lies in solving a nonlinear, ill-posed inverse problem to obtain desired tissue parameter maps from acquired raw data.
We propose PINQI, a novel qMRI reconstruction method that integrates the knowledge about the signal, acquisition model, and learned regularization into a single end-to-end trainable neural network.
arXiv Detail & Related papers (2023-06-19T15:37:53Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process (a generic implicit-step sketch appears at the end of this list).
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Fuzzy Attention Neural Network to Tackle Discontinuity in Airway
Segmentation [67.19443246236048]
Airway segmentation is crucial for the examination, diagnosis, and prognosis of lung diseases.
Some small-sized airway branches (e.g., bronchus and terminal bronchioles) significantly aggravate the difficulty of automatic segmentation.
This paper presents an efficient method for airway segmentation, comprising a novel fuzzy attention neural network and a comprehensive loss function.
arXiv Detail & Related papers (2022-09-05T16:38:13Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAIN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z) - Estimation of the Mean Function of Functional Data via Deep Neural
Networks [6.230751621285321]
We propose a deep neural network method to perform nonparametric regression for functional data.
The proposed method is applied to analyze positron emission tomography images of patients with Alzheimer disease.
arXiv Detail & Related papers (2020-12-08T17:18:16Z) - A simple normative network approximates local non-Hebbian learning in
the cortex [12.940770779756482]
Neuroscience experiments demonstrate that the processing of sensory inputs by cortical neurons is modulated by instructive signals.
Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.
Online algorithms can be implemented by neural networks whose synaptic learning rules resemble calcium plateau potential dependent plasticity observed in the cortex.
arXiv Detail & Related papers (2020-10-23T20:49:44Z)
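The implicit stochastic gradient descent (ISGD) entry above refers to the implicit update theta_{k+1} = theta_k - eta * grad L(theta_{k+1}), a proximal-point step that remains stable for far larger step sizes than the explicit update. The sketch below is a generic illustration rather than the paper's PINN training code: it applies the implicit step to a toy least-squares loss, where the implicit equation can be solved in closed form; a general PINN loss would need an iterative inner solver.

```python
import numpy as np

def implicit_gd_step_quadratic(theta, A, b, lr):
    """One implicit gradient step for L(theta) = 0.5 * ||A @ theta - b||^2.
    Solving theta_new = theta - lr * A.T @ (A @ theta_new - b) exactly gives
    (I + lr * A.T @ A) @ theta_new = theta + lr * A.T @ b."""
    n = theta.size
    return np.linalg.solve(np.eye(n) + lr * A.T @ A, theta + lr * A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
theta = np.zeros(5)
for _ in range(50):
    # A step size this large would make explicit gradient descent diverge;
    # the implicit (proximal) step stays stable for any positive lr.
    theta = implicit_gd_step_quadratic(theta, A, b, lr=10.0)
print(np.linalg.norm(A @ theta - b))  # approaches the least-squares residual
```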