CREIMBO: Cross-Regional Ensemble Interactions in Multi-view Brain Observations
- URL: http://arxiv.org/abs/2405.17395v2
- Date: Sun, 12 Jan 2025 02:26:57 GMT
- Title: CREIMBO: Cross-Regional Ensemble Interactions in Multi-view Brain Observations
- Authors: Noga Mudrik, Ryan Ly, Oliver Ruebel, Adam S. Charles
- Abstract summary: Current analysis methods often fail to harness the richness of such data.
CREIMBO identifies the hidden composition of per-session neural ensembles through graph-driven dictionary learning.
We demonstrate CREIMBO's ability to recover true components in synthetic data.
- Abstract: Modern recordings of neural activity provide diverse observations of neurons across brain areas, conditions, and subjects, presenting an exciting opportunity to reveal the fundamentals of brain-wide dynamics. Current analysis methods often fail to harness the richness of such data, as they either provide uninterpretable representations or rely on oversimplified models (e.g., by assuming stationary dynamics). Here, instead of regarding asynchronous neural recordings that lack alignment in neural identity or brain areas as a limitation, we leverage these diverse views into the brain to learn a unified model of neural dynamics. We assume that brain activity is driven by multiple hidden global sub-circuits. These sub-circuits represent global basis interactions between neural ensembles -- functional groups of neurons -- such that the time-varying decomposition of these circuits defines how the ensembles' interactions evolve over time non-stationarily. We discover the neural ensembles underlying non-simultaneous observations, along with their non-stationary evolving interactions, with our new model, CREIMBO. CREIMBO identifies the hidden composition of per-session neural ensembles through graph-driven dictionary learning and models the ensemble dynamics on a low-dimensional manifold spanned by a sparse time-varying composition of the global sub-circuits. Thus, CREIMBO disentangles overlapping temporal neural processes while preserving interpretability due to the use of a shared underlying sub-circuit basis. Moreover, CREIMBO distinguishes session-specific computations from global (session-invariant) ones by identifying session covariates and variations in sub-circuit activations. We demonstrate CREIMBO's ability to recover true components in synthetic data, and uncover meaningful brain dynamics including cross-subject neural mechanisms and inter- vs. intra-region dynamical motifs.
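To make the structure described in the abstract more concrete, below is a minimal, hypothetical sketch of that kind of generative model: per-session dictionaries assign neurons to shared ensembles, and the ensemble dynamics evolve under a sparse, time-varying combination of global sub-circuit matrices. This is an illustration under our own assumptions, not CREIMBO's actual formulation or code; all names, dimensions, and the specific update rule are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical dimensions (illustrative only) ---
n_neurons = 50      # neurons recorded in one session
n_ensembles = 6     # number of shared functional ensembles
n_subcircuits = 4   # size of the global sub-circuit basis
T = 200             # time points

# Global sub-circuit basis: each F[j] is an ensemble-by-ensemble interaction
# graph, shared across sessions ("global basis interactions" in the abstract).
F = rng.normal(scale=0.1, size=(n_subcircuits, n_ensembles, n_ensembles))

# Per-session dictionary assigning neurons to ensembles (graph-driven
# dictionary learning would estimate this; here it is random for the sketch).
A = np.abs(rng.normal(size=(n_neurons, n_ensembles)))

# Sparse, time-varying coefficients selecting which sub-circuits are active.
C = rng.normal(size=(T, n_subcircuits))
C[np.abs(C) < 1.0] = 0.0  # crude sparsification, for illustration only

# Ensemble dynamics: x_{t+1} = x_t + dt * (sum_j c_j(t) F_j) x_t, i.e. a
# time-varying linear system whose operator is a sparse combination of the
# global sub-circuits.
x = np.zeros((T, n_ensembles))
x[0] = rng.normal(size=n_ensembles)
for t in range(T - 1):
    op = np.tensordot(C[t], F, axes=1)   # (n_ensembles, n_ensembles) operator
    x[t + 1] = x[t] + 0.1 * op @ x[t]    # small-step update for stability

# Observed session activity: neurons read out the ensemble states plus noise.
Y = x @ A.T + 0.01 * rng.normal(size=(T, n_neurons))
print(Y.shape)  # (200, 50)
```

In this toy setup, fitting such a model to data would amount to estimating the per-session dictionary A, the shared basis F, and the sparse coefficients C, which mirrors how the abstract separates session-specific ensemble composition from global, session-invariant sub-circuits.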
Related papers
- Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities.
We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities.
We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
arXiv Detail & Related papers (2025-02-03T20:30:46Z) - Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning.
We introduce Artificial Kuramoto Oscillatory Neurons, which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms.
We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty, and reasoning.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Latent Representation Learning for Multimodal Brain Activity Translation [14.511112110420271]
We present the Spatiotemporal Alignment of Multimodal Brain Activity (SAMBA) framework, which bridges the spatial and temporal resolution gaps across modalities.
SAMBA introduces a novel attention-based wavelet decomposition for spectral filtering of electrophysiological recordings.
We show that the training of SAMBA, aside from achieving translation, also learns a rich representation of brain information processing.
arXiv Detail & Related papers (2024-09-27T05:50:29Z) - BrainMAE: A Region-aware Self-supervised Learning Framework for Brain Signals [11.030708270737964]
We propose Brain Masked Auto-Encoder (BrainMAE) for learning representations directly from fMRI time-series data.
BrainMAE consistently outperforms established baseline methods by significant margins in four distinct downstream tasks.
arXiv Detail & Related papers (2024-06-24T19:16:24Z) - Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z) - DSAM: A Deep Learning Framework for Analyzing Temporal and Spatial Dynamics in Brain Networks [4.041732967881764]
Most rs-fMRI studies compute a single static functional connectivity matrix across brain regions of interest.
These approaches are at risk of oversimplifying brain dynamics and lack proper consideration of the goal at hand.
We propose a novel interpretable deep learning framework that learns a goal-specific functional connectivity matrix directly from time series.
arXiv Detail & Related papers (2024-05-19T23:35:06Z) - Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture that is compatible with and scalable within deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z) - Two-compartment neuronal spiking model expressing brain-state specific apical-amplification, -isolation and -drive regimes [0.7255608805275865]
Brain-state-specific neural mechanisms play a crucial role in integrating past and contextual knowledge with the current, incoming flow of evidence.
This work aims to provide a two-compartment spiking neuron model that incorporates features essential for supporting brain-state-specific learning.
arXiv Detail & Related papers (2023-11-10T14:16:46Z) - Astrocytes as a mechanism for meta-plasticity and contextually-guided network function [2.66269503676104]
Astrocytes are a ubiquitous and enigmatic type of non-neuronal cell.
Astrocytes may play a more direct and active role in brain function and neural computation.
arXiv Detail & Related papers (2023-11-06T20:31:01Z) - DBGDGM: Dynamic Brain Graph Deep Generative Model [63.23390833353625]
Graphs are a natural representation of brain activity derived from functional magnetic resonance imaging (fMRI) data.
It is well known that clusters of anatomical brain regions, known as functional connectivity networks (FCNs), encode temporal relationships which can serve as useful biomarkers for understanding brain function and dysfunction.
Previous works, however, ignore the temporal dynamics of the brain and focus on static graphs.
We propose a dynamic brain graph deep generative model (DBGDGM) which simultaneously clusters brain regions into temporally evolving communities and learns dynamic unsupervised node embeddings.
arXiv Detail & Related papers (2023-01-26T20:45:30Z) - Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)