Emotional Brain State Classification on fMRI Data Using Deep Residual
and Convolutional Networks
- URL: http://arxiv.org/abs/2210.17015v1
- Date: Mon, 31 Oct 2022 02:08:02 GMT
- Title: Emotional Brain State Classification on fMRI Data Using Deep Residual
and Convolutional Networks
- Authors: Maxime Tchibozo, Donggeun Kim, Zijing Wang, Xiaofu He
- Abstract summary: We develop two Convolution-based approaches to decode emotional brain states.
These approaches could potentially be used in brain computer interfaces and real-time fMRI neurofeedback research.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of emotional brain state classification on functional MRI (fMRI)
data is to recognize brain activity patterns related to specific emotion tasks
performed by subjects during an experiment. Distinguishing emotional brain
states from other brain states using fMRI data has proven to be challenging due
to two factors: the difficulty of generating fast yet accurate predictions in
short time frames, and the difficulty of extracting emotion features that generalize to
unseen subjects. To address these challenges, we conducted an experiment in
which 22 subjects viewed pictures designed to stimulate either negative,
neutral or rest emotional responses while their brain activity was measured
using fMRI. We then developed two distinct Convolution-based approaches to
decode emotional brain states using only spatial information from single,
minimally pre-processed (slice timing and realignment) fMRI volumes. In our
first approach, we trained a 1D Convolutional Network (84.9% accuracy; chance
level 33%) to classify 3 emotion conditions using One-way Analysis of Variance
(ANOVA) voxel selection combined with hyperalignment. In our second approach,
we trained a 3D ResNet-50 model (78.0% accuracy; chance level 50%) to classify
2 emotion conditions from single 3D fMRI volumes directly. Our Convolutional
and Residual classifiers successfully learned group-level emotion features and
could decode emotion conditions from fMRI volumes in milliseconds. These
approaches could potentially be used in brain computer interfaces and real-time
fMRI neurofeedback research.
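The first approach's preprocessing pipeline (one-way ANOVA voxel selection followed by hyperalignment) can be sketched roughly as below. This is a minimal illustration on synthetic data, not the authors' code: the array shapes, effect size, number of kept voxels `k`, and the Procrustes-style alignment step are assumptions made for the example.

```python
# Sketch: ANOVA voxel selection + Procrustes-style hyperalignment on fake data.
# Shapes, labels, and k are illustrative assumptions, not the paper's settings.
import numpy as np
from sklearn.feature_selection import f_classif
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
n_per_class, n_voxels, k = 40, 500, 50

# Simulated single-volume voxel vectors for 3 conditions (negative/neutral/rest).
X = rng.normal(size=(3 * n_per_class, n_voxels))
y = np.repeat([0, 1, 2], n_per_class)
X[y == 0, :20] += 1.5  # make the first 20 voxels discriminative for condition 0

def anova_voxel_selection(X, y, k):
    """Keep the k voxels with the largest one-way ANOVA F-score across conditions."""
    f_scores, _ = f_classif(X, y)
    return np.argsort(f_scores)[::-1][:k]

selected = anova_voxel_selection(X, y, k)

# Hyperalignment sketch: rotate subject A's selected-voxel responses into
# subject B's space. Here subject B is simulated as a rotated copy of A.
Xa = X[:, selected]
Q, _ = np.linalg.qr(rng.normal(size=(k, k)))  # a random orthogonal "subject map"
Xb = Xa @ Q
R, _ = orthogonal_procrustes(Xa, Xb)          # recover the rotation from shared stimuli
err = np.linalg.norm(Xa @ R - Xb)
print(len(selected), err)
```

After selection and alignment, the aligned voxel vectors would feed the 1D Convolutional Network described above; the classifier itself is omitted here.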
Related papers
- Rest2Visual: Predicting Visually Evoked fMRI from Resting-State Scans [30.743554598059692]
We introduce Rest2Visual, a conditional generative model that predicts visually evoked fMRI (ve-fMRI) from resting-state input and 2D visual stimuli.
Our results provide compelling evidence that individualized spontaneous neural activity can be transformed into stimulus-aligned representations.
arXiv Detail & Related papers (2025-09-17T01:08:03Z) - Voxel-Level Brain States Prediction Using Swin Transformer [65.9194533414066]
We propose a novel architecture which employs a 4D Shifted Window (Swin) Transformer as encoder to efficiently learn spatiotemporal information, and a convolutional decoder to enable brain state prediction at the same spatial and temporal resolution as the input fMRI data.
Our model has shown high accuracy when predicting 7.2s of resting-state brain activity based on the prior 23.04s of fMRI time series.
This provides promising evidence that the spatiotemporal organization of the human brain can be learned by a Swin Transformer model at high resolution, which offers potential for reducing fMRI scan time and for the development of brain-computer interfaces.
arXiv Detail & Related papers (2025-06-13T04:14:38Z) - MindAligner: Explicit Brain Functional Alignment for Cross-Subject Visual Decoding from Limited fMRI Data [64.92867794764247]
MindAligner is a framework for cross-subject brain decoding from limited fMRI data.
Brain Transfer Matrix (BTM) projects the brain signals of an arbitrary new subject to one of the known subjects.
Brain Functional Alignment module is proposed to perform soft cross-subject brain alignment under different visual stimuli.
arXiv Detail & Related papers (2025-02-07T16:01:59Z) - Predicting Human Brain States with Transformer [45.25907962341717]
We show that a self-attention-based model can accurately predict brain states up to 5.04s ahead from the previous 21.6s of fMRI data.
These promising initial results demonstrate the possibility of developing generative models for fMRI data.
arXiv Detail & Related papers (2024-12-11T00:18:39Z) - NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping [9.423808859117122]
We introduce NeuroBOLT, i.e., Neuro-to-BOLD Transformer, to translate raw EEG data to fMRI activity signals across the brain.
Our experiments demonstrate that NeuroBOLT effectively reconstructs unseen resting-state fMRI signals from primary sensory, high-level cognitive areas, and deep subcortical brain regions.
arXiv Detail & Related papers (2024-10-07T02:47:55Z) - MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce a novel semantic alignment method of multi-subject fMRI signals using so-called MindFormer.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used for conditioning a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z) - MindShot: Brain Decoding Framework Using Only One Image [21.53687547774089]
MindShot is proposed to achieve effective few-shot brain decoding by leveraging cross-subject prior knowledge.
New subjects and pretrained individuals only need to view images of the same semantic class, significantly expanding the model's applicability.
arXiv Detail & Related papers (2024-05-24T07:07:06Z) - Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject.
We show that our model captures the distinct functionalities of each region of human vision system.
Preliminary evaluations indicate that Brain3D can successfully identify the disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z) - Animate Your Thoughts: Decoupled Reconstruction of Dynamic Natural Vision from Slow Brain Activity [13.291585611137355]
Reconstructing human dynamic vision from brain activity is a challenging task with great scientific significance.
This paper proposes a two-stage model named Mind-Animator, which achieves state-of-the-art performance on three public datasets.
We substantiate that the reconstructed video dynamics are indeed derived from fMRI, rather than hallucinations of the generative model.
arXiv Detail & Related papers (2024-05-06T08:56:41Z) - BrainODE: Dynamic Brain Signal Analysis via Graph-Aided Neural Ordinary Differential Equations [67.79256149583108]
We propose a novel model called BrainODE to achieve continuous modeling of dynamic brain signals.
By learning latent initial values and neural ODE functions from irregular time series, BrainODE effectively reconstructs brain signals at any time point.
arXiv Detail & Related papers (2024-04-30T10:53:30Z) - MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding by employing only one model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z) - Brain-ID: Learning Contrast-agnostic Anatomical Representations for
Brain Imaging [11.06907516321673]
We introduce Brain-ID, an anatomical representation learning model for brain imaging.
With the proposed "mild-to-severe" intra-subject generation, Brain-ID is robust to subject-specific brain anatomy.
We present new metrics to validate the intra- and inter-subject robustness, and evaluate their performance on four downstream applications.
arXiv Detail & Related papers (2023-11-28T16:16:10Z) - fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for
Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z) - Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain
Activities [31.448924808940284]
We introduce a two-phase fMRI representation learning framework.
The first phase pre-trains an fMRI feature learner with a proposed Double-contrastive Mask Auto-encoder to learn denoised representations.
The second phase tunes the feature learner to attend to neural activation patterns most informative for visual reconstruction with guidance from an image auto-encoder.
arXiv Detail & Related papers (2023-05-26T19:16:23Z) - Interpretation of 3D CNNs for Brain MRI Data Classification [56.895060189929055]
We extend previous findings on gender differences from diffusion-tensor imaging to T1 brain MRI scans.
We provide the voxel-wise 3D CNN interpretation comparing the results of three interpretation methods.
arXiv Detail & Related papers (2020-06-20T17:56:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.