Estimating Reproducible Functional Networks Associated with Task
Dynamics using Unsupervised LSTMs
- URL: http://arxiv.org/abs/2105.02869v1
- Date: Thu, 6 May 2021 17:53:22 GMT
- Title: Estimating Reproducible Functional Networks Associated with Task
Dynamics using Unsupervised LSTMs
- Authors: Nicha C. Dvornek, Pamela Ventola, and James S. Duncan
- Abstract summary: We propose a method for estimating more reproducible functional networks associated with task activity by using recurrent neural networks with long short-term memory (LSTM).
The LSTM model is trained in an unsupervised manner to generate the functional magnetic resonance imaging (fMRI) time-series data in regions of interest.
We demonstrate that the functional networks learned by the LSTM model are more strongly associated with the task activity and dynamics compared to other approaches.
- Score: 4.697267141773321
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method for estimating more reproducible functional networks that
are more strongly associated with dynamic task activity by using recurrent
neural networks with long short-term memory (LSTM). The LSTM model is trained
in an unsupervised manner to learn to generate the functional magnetic
resonance imaging (fMRI) time-series data in regions of interest. The learned
functional networks can then be used for further analysis, e.g., correlation
analysis to determine functional networks that are strongly associated with an
fMRI task paradigm. We test our approach and compare it to other methods for
decomposing functional networks from fMRI activity on two related but separate
datasets that employ a biological motion perception task. We demonstrate that
the functional networks learned by the LSTM model are more strongly associated
with the task activity and dynamics compared to other approaches. Furthermore,
the patterns of network association are more closely replicated across subjects
within the same dataset as well as across datasets. More reproducible
functional networks are essential for better characterizing the neural
correlates of a target task.
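The abstract does not spell out the architecture, so the following is only a minimal sketch of the idea under stated assumptions: a PyTorch LSTM trained with unsupervised next-step prediction on ROI time series, where the columns of a linear decoding layer are read as spatial networks, the hidden-unit activations as their time courses, and each time course is then correlated with a task regressor. All sizes, the toy data, and the task regressor are placeholders, not the authors' settings.

```python
import numpy as np
import torch
import torch.nn as nn

class GenerativeLSTM(nn.Module):
    """LSTM trained to generate ROI time series via next-step prediction."""
    def __init__(self, n_rois, n_networks):
        super().__init__()
        self.lstm = nn.LSTM(n_rois, n_networks, batch_first=True)
        # decode.weight has shape (n_rois, n_networks); column k can be
        # read as the spatial map of "network" k over the ROIs.
        self.decode = nn.Linear(n_networks, n_rois)

    def forward(self, x):
        h, _ = self.lstm(x)          # h: network time courses (B, T, n_networks)
        return self.decode(h), h

x = torch.randn(8, 150, 90)          # toy data: subjects x time x ROIs
model = GenerativeLSTM(n_rois=90, n_networks=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):                 # unsupervised: predict x[t+1] from x[..t]
    pred, _ = model(x[:, :-1])
    loss = nn.functional.mse_loss(pred, x[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()

# Correlate each network's time course with a (hypothetical) task regressor
# to find networks strongly associated with the task paradigm.
with torch.no_grad():
    _, h = model(x[:, :-1])
task = np.sin(np.linspace(0, 6 * np.pi, 149))   # placeholder block design
tc = h[0].numpy()                               # one subject, (149, 16)
assoc = [np.corrcoef(tc[:, k], task)[0, 1] for k in range(tc.shape[1])]
```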
Related papers
- Uncovering cognitive taskonomy through transfer learning in masked autoencoder-based fMRI reconstruction [6.3348067441225915]
We employ the masked autoencoder (MAE) model to reconstruct functional magnetic resonance imaging (fMRI) data.
Our study suggests that fMRI reconstruction with the MAE model can uncover latent representations (a minimal masked-reconstruction sketch follows this entry).
arXiv Detail & Related papers (2024-05-24T09:29:16Z)
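The MAE in the entry above is a transformer-based model in the original work; purely as an illustration of the masked-reconstruction idea on ROI time series, a hypothetical minimal variant with a GRU encoder could look like this (all names and shapes are placeholders, not the paper's architecture):

```python
import torch
import torch.nn as nn

class MaskedRecon(nn.Module):
    """Toy masked autoencoder: reconstruct hidden time points of ROI signals."""
    def __init__(self, n_rois, d=64):
        super().__init__()
        self.enc = nn.GRU(n_rois, d, batch_first=True)
        self.dec = nn.Linear(d, n_rois)

    def forward(self, x):
        h, _ = self.enc(x)
        return self.dec(h)

x = torch.randn(8, 150, 90)                  # subjects x time x ROIs
model = MaskedRecon(n_rois=90)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):
    mask = torch.rand(8, 150, 1) < 0.75      # True = time point hidden
    recon = model(x * (~mask))               # encode only the visible signal
    loss = ((recon - x)[mask.expand_as(x)] ** 2).mean()  # score masked points
    opt.zero_grad(); loss.backward(); opt.step()
```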
- DSAM: A Deep Learning Framework for Analyzing Temporal and Spatial Dynamics in Brain Networks [4.041732967881764]
Most rs-fMRI studies compute a single static functional connectivity matrix across brain regions of interest (see the sketch after this entry).
Such approaches risk oversimplifying brain dynamics and give little consideration to the analysis goal at hand.
We propose a novel interpretable deep learning framework that learns a goal-specific functional connectivity matrix directly from the time series.
arXiv Detail & Related papers (2024-05-19T23:35:06Z)
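For context, the "single static functional connectivity matrix" that DSAM argues is an oversimplification is typically just the Pearson correlation between every pair of ROI time series, computed once for the whole scan (toy data below):

```python
import numpy as np

def static_fc(ts):
    """Static functional connectivity from a (time x ROIs) array:
    Pearson correlation between every pair of ROI time series."""
    return np.corrcoef(ts.T)                 # (ROIs x ROIs), symmetric

ts = np.random.randn(200, 90)                # hypothetical: 200 TRs, 90 ROIs
fc = static_fc(ts)                           # one matrix for the whole scan;
                                             # all temporal dynamics are lost
```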
- Decomposing neural networks as mappings of correlation functions [57.52754806616669]
We study the mapping between probability distributions implemented by a deep feed-forward network.
We identify essential statistics in the data, as well as different information representations that can be used by neural networks.
arXiv Detail & Related papers (2022-02-10T09:30:31Z)
- Modeling Spatio-Temporal Dynamics in Brain Networks: A Comparison of Graph Neural Network Architectures [0.5033155053523041]
Graph neural networks (GNNs) offer a way to interpret new structured graph signals such as brain networks (a minimal graph-convolution layer is sketched after this entry).
We show that by learning localized functional interactions on the substrate, GNN-based approaches are able to robustly scale to large network studies.
arXiv Detail & Related papers (2021-12-08T12:57:13Z)
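As a concrete reference for "localized functional interactions" learned on a graph substrate, a single hand-rolled graph-convolution layer over a toy brain connectivity graph can be sketched as follows (the adjacency matrix and feature sizes are hypothetical, not from the paper):

```python
import torch

# Toy brain graph: 90 ROIs with a sparse, symmetric adjacency matrix
A = (torch.rand(90, 90) > 0.9).float()
A = ((A + A.t()) > 0).float()
A.fill_diagonal_(0.0)
A = A + torch.eye(90)                        # add self-loops
deg = A.sum(dim=1)
A_hat = A / torch.sqrt(deg[:, None] * deg[None, :])  # symmetric normalization

X = torch.randn(90, 16)                      # per-ROI input features
W = torch.randn(16, 8) * 0.1                 # layer weights (learnable in practice)
H = torch.relu(A_hat @ X @ W)                # each ROI aggregates its neighbors:
                                             # the "localized" interaction
```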
- Learning Interpretable Models for Coupled Networks Under Domain Constraints [8.308385006727702]
We investigate the idea of coupled networks by focusing on interactions between structural edges and functional edges of brain networks.
We propose a novel formulation to place hard network constraints on the noise term while estimating interactions.
We validate our method on multishell diffusion and task-evoked fMRI datasets from the Human Connectome Project.
arXiv Detail & Related papers (2021-04-19T06:23:31Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- A journey in ESN and LSTM visualisations on a language task [77.34726150561087]
We trained ESNs and LSTMs on a Cross-Situational Learning (CSL) task.
The results are of three kinds: performance comparison, internal dynamics analyses and visualization of latent space.
arXiv Detail & Related papers (2020-12-03T08:32:01Z)
- Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA); classical RSA is sketched after this entry.
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z)
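For reference, the classical RSA core that DRSL extends, comparing representational dissimilarity matrices (RDMs) from two measurement sources, fits in a few lines (random placeholder data below):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

acts_a = np.random.randn(12, 200)   # 12 task conditions x voxels (source A)
acts_b = np.random.randn(12, 50)    # same conditions, another source
rdm_a = pdist(acts_a, metric="correlation")  # condition-pair dissimilarities
rdm_b = pdist(acts_b, metric="correlation")
rho, _ = spearmanr(rdm_a, rdm_b)    # second-order similarity between sources
```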
- Object Tracking through Residual and Dense LSTMs [67.98948222599849]
Trackers based on LSTM (Long Short-Term Memory) recurrent neural networks have emerged as a powerful alternative.
Dense LSTMs outperform residual and regular LSTMs and offer higher resilience to nuisances.
Our case study supports the adoption of residual-based RNNs for enhancing the robustness of other trackers.
arXiv Detail & Related papers (2020-06-22T08:20:17Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics (a sketch of such a combined loss follows this entry).
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
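The entry above does not give the training strategy in detail; one hedged reading, jointly fitting the task output and the recorded activity of a small subset of hidden units, can be sketched as follows (all shapes, targets, and the observed-unit subset are hypothetical):

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=10, hidden_size=64, batch_first=True)
readout = nn.Linear(64, 2)
observed = torch.randperm(64)[:8]           # "recorded" hidden units only

x = torch.randn(16, 100, 10)                # inputs
y_target = torch.randn(16, 100, 2)          # desired output signals
h_target = torch.randn(16, 100, 8)          # sparse internal recordings

opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()))
for _ in range(200):
    h, _ = rnn(x)                           # h: (batch, time, hidden)
    loss = (nn.functional.mse_loss(readout(h), y_target)          # output fit
            + nn.functional.mse_loss(h[..., observed], h_target)) # dynamics fit
    opt.zero_grad(); loss.backward(); opt.step()
```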
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.