Reconstructing the Hemodynamic Response Function via a Bimodal Transformer
- URL: http://arxiv.org/abs/2306.15971v1
- Date: Wed, 28 Jun 2023 07:15:45 GMT
- Title: Reconstructing the Hemodynamic Response Function via a Bimodal Transformer
- Authors: Yoni Choukroun, Lior Golgher, Pablo Blinder, Lior Wolf
- Abstract summary: The relationship between blood flow and neuronal activity is widely recognized, with blood flow frequently serving as a surrogate for neuronal activity in fMRI studies.
At the microscopic level, neuronal activity has been shown to influence blood flow in nearby blood vessels.
This study introduces the first predictive model that addresses this issue directly at the explicit neuronal population level.
- Score: 71.09149960917813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The relationship between blood flow and neuronal activity is widely
recognized, with blood flow frequently serving as a surrogate for neuronal
activity in fMRI studies. At the microscopic level, neuronal activity has been
shown to influence blood flow in nearby blood vessels. This study introduces
the first predictive model that addresses this issue directly at the explicit
neuronal population level. Using in vivo recordings in awake mice, we employ a
novel spatiotemporal bimodal transformer architecture to infer current blood
flow based on both historical blood flow and ongoing spontaneous neuronal
activity. Our findings indicate that incorporating neuronal activity
significantly enhances the model's ability to predict blood flow values.
Through analysis of the model's behavior, we propose hypotheses regarding the
largely unexplored nature of the hemodynamic response to neuronal activity.
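As a rough illustration of the architecture described above, the following sketch shows a minimal two-stream ("bimodal") transformer that encodes historical blood flow and ongoing neuronal activity separately and fuses them with cross-attention to regress the current blood-flow values. The layer sizes, the cross-attention fusion scheme, the regression head, and all names are assumptions made for illustration; the paper's actual model may differ.

```python
import torch
import torch.nn as nn


class BimodalHemodynamicTransformer(nn.Module):
    """Two-stream transformer: encode each modality, fuse with cross-attention."""

    def __init__(self, n_vessels, n_neurons, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        # Per-timestep projections of each modality into a shared model dimension.
        self.flow_proj = nn.Linear(n_vessels, d_model)
        self.neuro_proj = nn.Linear(n_neurons, d_model)

        def make_encoder():
            layer = nn.TransformerEncoderLayer(
                d_model, nhead, dim_feedforward=4 * d_model, batch_first=True
            )
            return nn.TransformerEncoder(layer, num_layers=num_layers)

        # Independent temporal encoders, one per modality
        # (positional encodings omitted for brevity).
        self.flow_encoder = make_encoder()
        self.neuro_encoder = make_encoder()
        # Cross-attention: blood-flow tokens query the neuronal-activity tokens.
        self.cross_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        # Regression head for the current blood-flow values.
        self.head = nn.Linear(d_model, n_vessels)

    def forward(self, flow_hist, neuro_hist):
        # flow_hist:  (batch, T, n_vessels)  historical blood-flow signal
        # neuro_hist: (batch, T, n_neurons)  ongoing spontaneous neuronal activity
        f = self.flow_encoder(self.flow_proj(flow_hist))
        n = self.neuro_encoder(self.neuro_proj(neuro_hist))
        fused, _ = self.cross_attn(query=f, key=n, value=n)
        # Predict blood flow at the current time point from the last fused token.
        return self.head(fused[:, -1, :])


model = BimodalHemodynamicTransformer(n_vessels=32, n_neurons=100)
prediction = model(torch.randn(8, 50, 32), torch.randn(8, 50, 100))  # shape (8, 32)
```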
Related papers
- Modeling dynamic neural activity by combining naturalistic video stimuli and stimulus-independent latent factors [5.967290675400836]
We propose a probabilistic model that incorporates video inputs along with stimulus-independent latent factors to capture variability in neuronal responses.
After training and testing our model on mouse V1 neuronal responses, we found that it outperforms video-only models in terms of log-likelihood.
We find that the learned latent factors strongly correlate with mouse behavior, although the model was trained without behavior data.
arXiv Detail & Related papers (2024-10-21T16:01:39Z)
- Confidence Regulation Neurons in Language Models [91.90337752432075]
This study investigates the mechanisms by which large language models represent and regulate uncertainty in next-token predictions.
Entropy neurons are characterized by an unusually high weight norm and influence the final layer normalization (LayerNorm) scale to effectively scale down the logits.
Token frequency neurons, which we describe here for the first time, boost or suppress each token's logit proportionally to its log frequency, thereby shifting the output distribution towards or away from the unigram distribution (a minimal illustrative sketch follows this list).
arXiv Detail & Related papers (2024-06-24T01:31:03Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer [19.329190789275565]
We introduce SpatioTemporal Neural Data Transformer (STNDT), an NDT-based architecture that explicitly models responses of individual neurons.
We show that our model achieves state-of-the-art performance at the ensemble level in estimating neural activity across four neural datasets.
arXiv Detail & Related papers (2022-06-09T18:54:23Z)
- Mesoscopic modeling of hidden spiking neurons [3.6868085124383616]
We use coarse-graining and mean-field approximations to derive a bottom-up, neuronally-grounded latent variable model (neuLVM).
The neuLVM can be explicitly mapped to a recurrent, multi-population spiking neural network (SNN).
We show, on synthetic spike trains, that a few observed neurons are sufficient for neuLVM to perform efficient model inversion of large SNNs.
arXiv Detail & Related papers (2022-05-26T17:04:39Z)
- Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta$-$\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
arXiv Detail & Related papers (2022-04-05T17:13:36Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Microvascular Dynamics from 4D Microscopy Using Temporal Segmentation [81.30750944868142]
We are able to track changes in cerebral blood volume over time and identify spontaneous arterial dilations that propagate towards the pial surface.
This new imaging capability is a promising step towards characterizing the hemodynamic response function upon which functional magnetic resonance imaging (fMRI) is based.
arXiv Detail & Related papers (2020-01-14T22:55:03Z)
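The sketch below, referenced from the Confidence Regulation Neurons entry above, illustrates the token-frequency-neuron effect described there: adding a logit contribution proportional to each token's log unigram frequency shifts probability mass toward frequent tokens. The toy vocabulary, the counts, and the scalar neuron_activation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 1_000
token_counts = rng.integers(1, 10_000, size=vocab_size)   # toy corpus token counts
log_freq = np.log(token_counts / token_counts.sum())      # log unigram frequencies

logits = rng.normal(size=vocab_size)   # the model's logits before the neuron acts
neuron_activation = 0.8                # positive activation favours frequent tokens

# Assumed effect of a token frequency neuron: its contribution to the logits is
# proportional to each token's log frequency.
adjusted_logits = logits + neuron_activation * log_freq

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

p_before, p_after = softmax(logits), softmax(adjusted_logits)

# Probability mass on the most frequent tokens grows, i.e. the output
# distribution is nudged toward the unigram distribution.
frequent = np.argsort(token_counts)[-50:]
print(p_before[frequent].sum(), p_after[frequent].sum())
```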