STN: a new tensor network method to identify stimulus category from
brain activity pattern
- URL: http://arxiv.org/abs/2210.16993v2
- Date: Thu, 3 Nov 2022 01:35:43 GMT
- Title: STN: a new tensor network method to identify stimulus category from
brain activity pattern
- Authors: Chunyu Liu and Jiacai Zhang
- Abstract summary: This research proposes a stimulus-constrained tensor brain model (STN), which incorporates the idea of tensor decomposition together with stimulus category constraint information.
The experimental results show that the STN model achieved improvements of 11.06% and 18.46% over other methods on the two modality data sets.
- Score: 4.9915703454549565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural decoding remains a challenging and hot topic in neurocomputing
science. Recently, many studies have shown that brain network patterns contain
rich spatial and temporal structure information, which represents the
activation of the brain under external stimuli. The traditional approach is to
extract brain network features directly with common machine learning methods,
feed these features into a classifier, and thereby decode the external
stimuli. However, this approach cannot effectively extract the
multi-dimensional structural information hidden in the brain network.
Tensor researchers have shown that tensor decomposition models can fully mine
the unique spatio-temporal structure characteristics of multi-dimensional
data. This research proposes a stimulus-constrained tensor brain model (STN),
which incorporates the idea of tensor decomposition together with stimulus
category constraint information. The model was verified on real neuroimaging
data sets (MEG and fMRI). The experimental results show that the STN model
achieved improvements of 11.06% and 18.46% over other methods on the two
modality data sets. These results imply the superiority of the STN model in
extracting discriminative characteristics, especially for decoding object
stimuli with semantic information.
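The core idea above, factorizing a multi-way brain-activity array so that its spatial and temporal structure is mined jointly rather than flattened into a feature vector, can be illustrated with a minimal CP (CANDECOMP/PARAFAC) decomposition. This is a sketch in plain NumPy, not the STN model itself: the abstract gives no implementation details, so the tensor layout (trial x space x time), the rank, and the alternating-least-squares solver are all illustrative assumptions, and the stimulus category constraint is omitted.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode (mode-n unfolding)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R)."""
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-R CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            # Fix the other two factor matrices and solve a linear
            # least-squares problem for the current mode's factor.
            A, B = [factors[m] for m in range(3) if m != mode]
            kr = khatri_rao(A, B)
            factors[mode] = np.linalg.lstsq(
                kr, unfold(T, mode).T, rcond=None)[0].T
    return factors

# Toy example: build an exactly rank-2 "trial x space x time" tensor and
# recover its factors.
rng = np.random.default_rng(1)
U = [rng.standard_normal((s, 2)) for s in (6, 5, 4)]
T = np.einsum('ir,jr,kr->ijk', *U)
F = cp_als(T, rank=2)
That = np.einsum('ir,jr,kr->ijk', *F)
# Relative reconstruction error (expected to be near zero for an
# exact-rank tensor).
print(np.linalg.norm(T - That) / np.linalg.norm(T))
```

In the STN setting, one would additionally penalize or constrain the trial-mode factor so that trials sharing a stimulus category map to nearby factor rows; the sketch above shows only the unconstrained decomposition step.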
Related papers
- Revealing Neurocognitive and Behavioral Patterns by Unsupervised Manifold Learning from Dynamic Brain Data [29.522638794625536]
This paper introduces a generalizable unsupervised deep manifold learning approach for the exploration of neurocognitive and behavioral patterns. The proposed Brain-dynamic Convolutional-Network-based Embedding (BCNE) seeks to capture brain-state trajectories by deciphering temporospatial correlations within the data. The results, both visual and quantitative, reveal a diverse array of intriguing and interpretable patterns.
arXiv Detail & Related papers (2025-08-07T23:36:52Z) - Concept-Guided Interpretability via Neural Chunking [54.73787666584143]
We show that neural networks exhibit patterns in their raw population activity that mirror regularities in the training data. We propose three methods to extract these emerging entities, complementing each other based on label availability and dimensionality. Our work points to a new direction for interpretability, one that harnesses both cognitive principles and the structure of naturalistic data.
arXiv Detail & Related papers (2025-05-16T13:49:43Z) - A Review of Latent Representation Models in Neuroimaging [0.0]
Latent representation models are designed to reduce high-dimensional neuroimaging data to lower-dimensional latent spaces.
By modeling these latent spaces, researchers hope to gain insights into the biology and function of the brain.
This review discusses how these models are used for clinical applications, like disease diagnosis and progression monitoring, but also for exploring fundamental brain mechanisms.
arXiv Detail & Related papers (2024-12-24T19:12:11Z) - Exploring Behavior-Relevant and Disentangled Neural Dynamics with Generative Diffusion Models [2.600709013150986]
Understanding the neural basis of behavior is a fundamental goal in neuroscience.
Our approach, named BeNeDiff, first identifies a fine-grained and disentangled neural subspace.
It then employs state-of-the-art generative diffusion models to synthesize behavior videos that interpret the neural dynamics of each latent factor.
arXiv Detail & Related papers (2024-10-12T18:28:56Z) - Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z) - NeuroMoCo: A Neuromorphic Momentum Contrast Learning Method for Spiking Neural Networks [18.038225756466844]
This paper introduces Neuromorphic Momentum Contrast Learning (NeuroMoCo) for brain-inspired spiking neural networks (SNNs).
This is the first time that self-supervised learning (SSL) based on momentum contrastive learning is realized in SNNs.
Experiments on DVS-CIFAR10, DVS128Gesture, and N-Caltech101 show that NeuroMoCo establishes new state-of-the-art (SOTA) benchmarks.
arXiv Detail & Related papers (2024-06-10T14:20:48Z) - Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z) - Deep Latent Variable Modeling of Physiological Signals [0.8702432681310401]
We explore high-dimensional problems related to physiological monitoring using latent variable models.
First, we present a novel deep state-space model to generate electrical waveforms of the heart using optically obtained signals as inputs.
Second, we present a brain signal modeling scheme that combines the strengths of probabilistic graphical models and deep adversarial learning.
Third, we propose a framework for the joint modeling of physiological measures and behavior.
arXiv Detail & Related papers (2024-05-29T17:07:33Z) - Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z) - DSAM: A Deep Learning Framework for Analyzing Temporal and Spatial Dynamics in Brain Networks [4.041732967881764]
Most rs-fMRI studies compute a single static functional connectivity matrix across brain regions of interest.
These approaches are at risk of oversimplifying brain dynamics and lack proper consideration of the goal at hand.
We propose a novel interpretable deep learning framework that learns a goal-specific functional connectivity matrix directly from time series.
arXiv Detail & Related papers (2024-05-19T23:35:06Z) - MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding by employing only one model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z) - Functional2Structural: Cross-Modality Brain Networks Representation
Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.