Toward Generalizing Visual Brain Decoding to Unseen Subjects
- URL: http://arxiv.org/abs/2410.14445v2
- Date: Mon, 21 Oct 2024 01:45:47 GMT
- Title: Toward Generalizing Visual Brain Decoding to Unseen Subjects
- Authors: Xiangtao Kong, Kexin Huang, Ping Li, Lei Zhang
- Abstract summary: We first consolidate an image-fMRI dataset consisting of stimulus-image and fMRI-response pairs, involving 177 subjects in the movie-viewing task of the Human Connectome Project (HCP).
We then present a learning paradigm that applies uniform processing across all subjects, instead of employing different network heads or tokenizers for individuals as in previous methods.
Our findings reveal the inherent similarities in brain activities across individuals.
- Score: 20.897856078151506
- Abstract: Visual brain decoding aims to decode visual information from human brain activities. Despite great progress, a critical limitation of current brain decoding research is the lack of generalization to unseen subjects. Prior works typically decode the brain activity of individual subjects, based on the observation that different subjects exhibit different brain activities, and it remains unclear whether brain decoding can generalize to unseen subjects. This study aims to answer this question. We first consolidate an image-fMRI dataset of stimulus-image and fMRI-response pairs from 177 subjects in the movie-viewing task of the Human Connectome Project (HCP). This dataset allows us to investigate how brain decoding performance changes as the number of participants increases. We then present a learning paradigm that applies uniform processing across all subjects, instead of employing different network heads or tokenizers for individuals as in previous methods, so that a large number of subjects can be accommodated and the generalization capability across subjects can be explored. A series of experiments yields the following findings. First, the network exhibits clear generalization capability as the number of training subjects increases. Second, this generalization capability is common to popular network architectures (MLP, CNN and Transformer). Third, the generalization performance is affected by the similarity between subjects. Our findings reveal the inherent similarities in brain activities across individuals. With the emergence of larger and more comprehensive datasets, it may become possible to train a brain decoding foundation model. Code and models can be found at https://github.com/Xiangtaokong/TGBD.
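As a concrete illustration of the uniform-processing paradigm described above, the sketch below uses a single MLP decoder that maps any subject's fMRI response to an image-embedding space, with no per-subject heads or tokenizers. The dimensions, loss, and module names are illustrative assumptions, not the authors' released code (see the linked repository for that).

```python
import torch
import torch.nn as nn

class SharedFMRIDecoder(nn.Module):
    """One decoder for all subjects: fMRI vector -> image-embedding vector.

    Assumes every subject's response has been resampled to the same
    dimensionality (e.g., a common cortical parcellation), so no
    subject-specific heads or tokenizers are needed.
    """

    def __init__(self, fmri_dim: int = 1024, embed_dim: int = 512, hidden: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(fmri_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, hidden),
            nn.GELU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, fmri: torch.Tensor) -> torch.Tensor:
        return self.net(fmri)


# Toy training step: pull the decoded embedding toward the stimulus-image
# embedding (e.g., from a frozen image encoder) with a cosine loss.
decoder = SharedFMRIDecoder(fmri_dim=1024, embed_dim=512)
optimizer = torch.optim.AdamW(decoder.parameters(), lr=1e-4)

fmri_batch = torch.randn(8, 1024)      # responses from *mixed* subjects
image_embed = torch.randn(8, 512)      # embeddings of the viewed images

pred = decoder(fmri_batch)
loss = 1 - nn.functional.cosine_similarity(pred, image_embed, dim=-1).mean()
loss.backward()
optimizer.step()
```

Because the same network consumes every subject's data, adding subjects simply adds training pairs, which is what makes the scaling and generalization study across 177 subjects feasible.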
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons (ANs) with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
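One simple way such a coupling could be computed (a hedged sketch; the paper's actual procedure may differ) is to correlate each artificial neuron's activation time series over a stimulus sequence with the time courses of the FBNs, then group neurons by their best-matching network:

```python
import numpy as np

def couple_neurons_to_fbns(neuron_acts: np.ndarray, fbn_timecourses: np.ndarray) -> np.ndarray:
    """Assign each artificial neuron to its best-correlated functional brain network.

    neuron_acts:      (T, N) activations of N ANN neurons over T stimuli/timepoints
    fbn_timecourses:  (T, K) time courses of K functional brain networks
    Returns an array of length N with the index of the best-matching FBN.
    """
    # z-score over time so Pearson correlation reduces to a dot product / T
    za = (neuron_acts - neuron_acts.mean(0)) / (neuron_acts.std(0) + 1e-8)
    zf = (fbn_timecourses - fbn_timecourses.mean(0)) / (fbn_timecourses.std(0) + 1e-8)
    corr = za.T @ zf / za.shape[0]          # (N, K) correlation matrix
    return corr.argmax(axis=1)              # FBN index per neuron

# Example with random data
rng = np.random.default_rng(0)
assignment = couple_neurons_to_fbns(rng.normal(size=(200, 64)), rng.normal(size=(200, 7)))
print(np.bincount(assignment, minlength=7))  # neurons per FBN
```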
arXiv Detail & Related papers (2024-10-25T13:15:17Z) - Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z) - The Wisdom of a Crowd of Brains: A Universal Brain Encoder [10.127005930959823]
We propose a Universal Brain-Encoder, which can be trained jointly on data from many different subjects/datasets/machines.
Our Universal Brain-Encoder is trained to predict the response of each brain voxel to every image, by directly computing the cross-attention between the brain-voxel embedding and multi-level deep image features.
This voxel-centric architecture allows the functional role of each brain-voxel to naturally emerge from the voxel-image cross-attention.
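A minimal sketch of this voxel-centric cross-attention, assuming learned per-voxel embeddings as queries and a single level of deep image features as keys and values; the sizes and module names are illustrative, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class VoxelImageCrossAttention(nn.Module):
    """Predict each brain voxel's response to an image via cross-attention."""

    def __init__(self, n_voxels: int, feat_dim: int = 256, n_heads: int = 4):
        super().__init__()
        # one learned embedding per voxel acts as the attention query
        self.voxel_embed = nn.Parameter(torch.randn(n_voxels, feat_dim) * 0.02)
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.readout = nn.Linear(feat_dim, 1)

    def forward(self, image_feats: torch.Tensor) -> torch.Tensor:
        # image_feats: (B, L, feat_dim) tokens from an image backbone
        B = image_feats.shape[0]
        q = self.voxel_embed.unsqueeze(0).expand(B, -1, -1)     # (B, V, D)
        attended, _ = self.attn(q, image_feats, image_feats)    # (B, V, D)
        return self.readout(attended).squeeze(-1)               # (B, V) predicted responses

model = VoxelImageCrossAttention(n_voxels=1000)
pred = model(torch.randn(2, 49, 256))   # e.g., a 7x7 feature map flattened to 49 tokens
print(pred.shape)                        # torch.Size([2, 1000])
```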
arXiv Detail & Related papers (2024-06-18T01:17:07Z) - BRACTIVE: A Brain Activation Approach to Human Visual Brain Learning [11.517021103782229]
We introduce Brain Activation Network (BRACTIVE), a transformer-based approach to studying the human visual brain.
The main objective of BRACTIVE is to align the visual features of subjects with corresponding brain representations via fMRI signals.
Our experiments demonstrate that BRACTIVE effectively identifies person-specific regions of interest, such as face and body-selective areas.
arXiv Detail & Related papers (2024-05-29T06:50:13Z) - Wills Aligner: A Robust Multi-Subject Brain Representation Learner [19.538200208523467]
We introduce Wills Aligner, a robust multi-subject brain representation learner.
Wills Aligner initially aligns different subjects' brains at the anatomical level.
It incorporates a mixture of brain experts to learn individual cognition patterns.
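A hedged sketch of the mixture-of-experts idea: shared experts combined by a per-subject gate, so individual cognition patterns are captured without duplicating the whole network. The gating scheme and sizes are assumptions for illustration, not the Wills Aligner implementation:

```python
import torch
import torch.nn as nn

class MixtureOfBrainExperts(nn.Module):
    """Shared experts + per-subject gating over their outputs."""

    def __init__(self, n_subjects: int, in_dim: int, out_dim: int, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(n_experts)]
        )
        # one learned gate vector (over experts) per subject
        self.gates = nn.Embedding(n_subjects, n_experts)

    def forward(self, x: torch.Tensor, subject_id: torch.Tensor) -> torch.Tensor:
        # x: (B, in_dim), subject_id: (B,) integer ids
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, out_dim)
        w = torch.softmax(self.gates(subject_id), dim=-1)              # (B, E)
        return (w.unsqueeze(-1) * expert_out).sum(dim=1)               # (B, out_dim)

moe = MixtureOfBrainExperts(n_subjects=8, in_dim=1024, out_dim=512)
y = moe(torch.randn(4, 1024), torch.tensor([0, 3, 3, 7]))
print(y.shape)  # torch.Size([4, 512])
```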
arXiv Detail & Related papers (2024-04-20T06:01:09Z) - MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with only one model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z) - Aligning brain functions boosts the decoding of visual semantics in novel subjects [3.226564454654026]
We propose to boost brain decoding by aligning brain responses to videos and static images across subjects.
Our method improves out-of-subject decoding performance by up to 75%.
It also outperforms classical single-subject approaches when less than 100 minutes of data is available for the tested subject.
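Cross-subject functional alignment of this kind is often implemented as a regression from a new subject's responses to a reference subject's responses on shared stimuli; the sketch below uses ridge regression as one such choice (an assumption, not necessarily the paper's exact method):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Shared stimuli seen by both subjects: responses are (n_stimuli, n_voxels)
rng = np.random.default_rng(0)
ref_resp = rng.normal(size=(300, 500))    # reference subject
new_resp = rng.normal(size=(300, 480))    # new (left-out) subject, different voxel count

# Fit the alignment on a training split of the shared stimuli
align = Ridge(alpha=10.0)
align.fit(new_resp[:250], ref_resp[:250])

# Map held-out responses of the new subject into the reference space,
# where a decoder trained on the reference subject can be applied directly.
mapped = align.predict(new_resp[250:])    # (50, 500)
print(mapped.shape)
```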
arXiv Detail & Related papers (2023-12-11T15:55:20Z) - Brain Decodes Deep Nets [9.302098067235507]
We developed a tool for visualizing and analyzing large pre-trained vision models by mapping them onto the brain.
Our innovation arises from a surprising usage of brain encoding: predicting brain fMRI measurements in response to images.
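Brain encoding in this sense is typically a regression from deep image features to voxel responses; a minimal sketch with ridge regression on synthetic data (illustrative, not the paper's pipeline):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
img_feats = rng.normal(size=(400, 768))    # features from a pre-trained vision model
# synthetic voxel responses with a linear relationship plus noise
voxels = img_feats @ rng.normal(size=(768, 200)) * 0.1 + rng.normal(size=(400, 200))

enc = Ridge(alpha=100.0)
enc.fit(img_feats[:320], voxels[:320])     # fit the encoder on training images
pred = enc.predict(img_feats[320:])        # predict held-out fMRI responses
print(r2_score(voxels[320:], pred, multioutput="uniform_average"))
```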
arXiv Detail & Related papers (2023-12-03T04:36:04Z) - Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task, holistic framework is presented that can jointly learn and effectively generalize to perform affect recognition.
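A hedged sketch of such a multi-task setup: one shared backbone with separate heads for valence-arousal regression, basic-expression classification and action-unit detection. The ResNet-18 backbone and head sizes are illustrative stand-ins, not the paper's networks:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiTaskAffectNet(nn.Module):
    """Shared backbone with valence-arousal, expression and AU heads."""

    def __init__(self, n_expressions: int = 7, n_aus: int = 12):
        super().__init__()
        backbone = resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.va_head = nn.Linear(feat_dim, 2)              # valence, arousal in [-1, 1]
        self.expr_head = nn.Linear(feat_dim, n_expressions)
        self.au_head = nn.Linear(feat_dim, n_aus)          # multi-label AU logits

    def forward(self, x):
        f = self.backbone(x)
        return torch.tanh(self.va_head(f)), self.expr_head(f), self.au_head(f)

model = MultiTaskAffectNet()
va, expr_logits, au_logits = model(torch.randn(2, 3, 224, 224))
print(va.shape, expr_logits.shape, au_logits.shape)
```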
arXiv Detail & Related papers (2021-03-29T17:36:20Z) - What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations.
Our experiments show that our "muscly-supervised" representation outperforms MoCo, a state-of-the-art visual-only method.
arXiv Detail & Related papers (2020-10-16T17:46:53Z) - Interpretation of 3D CNNs for Brain MRI Data Classification [56.895060189929055]
We extend previous findings on gender differences from diffusion-tensor imaging to T1 brain MRI scans.
We provide the voxel-wise 3D CNN interpretation comparing the results of three interpretation methods.
arXiv Detail & Related papers (2020-06-20T17:56:46Z)
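A hedged sketch of one common voxel-wise interpretation method, plain input-gradient saliency, applied to a toy 3D CNN standing in for the paper's classifier (the paper itself compares three interpretation methods):

```python
import torch
import torch.nn as nn

# Toy 3D CNN classifier standing in for a brain-MRI model
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

volume = torch.randn(1, 1, 32, 32, 32, requires_grad=True)   # one MRI volume
logits = model(volume)
logits[0, logits.argmax()].backward()          # gradient of the predicted class score

saliency = volume.grad.abs().squeeze()         # (32, 32, 32) voxel-wise importance map
print(saliency.shape, saliency.max().item())
```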
This list is automatically generated from the titles and abstracts of the papers in this site.