Identifying Shared Decodable Concepts in the Human Brain Using
Image-Language Foundation Models
- URL: http://arxiv.org/abs/2306.03375v1
- Date: Tue, 6 Jun 2023 03:29:47 GMT
- Title: Identifying Shared Decodable Concepts in the Human Brain Using
Image-Language Foundation Models
- Authors: Cory Efird, Alex Murphy, Joel Zylberberg, Alona Fyshe
- Abstract summary: We introduce a method that takes advantage of high-quality pretrained multimodal representations to explore fine-grained semantic networks in the human brain.
To identify such brain regions, we developed a data-driven approach to uncover visual concepts that are decodable from a massive functional magnetic resonance imaging (fMRI) dataset.
- Score: 2.213723689024101
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a method that takes advantage of high-quality pretrained
multimodal representations to explore fine-grained semantic networks in the
human brain. Previous studies have documented evidence of functional
localization in the brain, with different anatomical regions preferentially
activating for different types of sensory input. Many such localized structures
are known, including the fusiform face area and parahippocampal place area.
This raises the question of whether additional brain regions (or conjunctions
of brain regions) are also specialized for other important semantic concepts.
To identify such brain regions, we developed a data-driven approach to uncover
visual concepts that are decodable from a massive functional magnetic resonance
imaging (fMRI) dataset. Our analysis is broadly split into three sections.
First, a fully connected neural network is trained to map brain responses to
the outputs of an image-language foundation model, CLIP (Radford et al., 2021).
Subsequently, a contrastive-learning dimensionality reduction method reveals
the brain-decodable components of CLIP space. In the final section of our
analysis, we localize shared decodable concepts in the brain using a
voxel-masking optimization method to produce a shared decodable concept (SDC)
space. The accuracy of our procedure is validated by comparing it to previous
localization experiments that identify regions for faces, bodies, and places.
In addition to these concepts, whose corresponding brain regions were already
known, we localize novel concept representations which are shared across
participants to other areas of the human brain. We also demonstrate how this
method can be used to inspect fine-grained semantic networks for individual
participants. We envisage that this extensible method can also be adapted to
explore other questions at the intersection of AI and neuroscience.
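The first stage of the pipeline maps brain responses to CLIP embeddings with a fully connected network. The sketch below illustrates that idea on synthetic data only: the array sizes, hidden width, learning rate, and plain MSE objective are assumptions for illustration, not the authors' implementation (real NSD-scale data has tens of thousands of voxels and 512-dimensional CLIP vectors).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 stimuli, 1000 "voxels", 64-d "CLIP" targets.
n, n_vox, n_clip, n_hid = 200, 1000, 64, 128
W_true = rng.normal(size=(n_vox, n_clip)) / np.sqrt(n_vox)
X = rng.normal(size=(n, n_vox))                      # fMRI responses (stimuli x voxels)
Y = X @ W_true + 0.1 * rng.normal(size=(n, n_clip))  # target CLIP embeddings

# One-hidden-layer fully connected network, trained with MSE + gradient descent.
W1 = rng.normal(size=(n_vox, n_hid)) * np.sqrt(2.0 / n_vox)
b1 = np.zeros(n_hid)
W2 = rng.normal(size=(n_hid, n_clip)) * np.sqrt(2.0 / n_hid)
b2 = np.zeros(n_clip)

def forward(X):
    H = np.maximum(X @ W1 + b1, 0.0)  # ReLU hidden layer
    return H, H @ W2 + b2

lr = 0.1
_, pred0 = forward(X)
loss0 = np.mean((pred0 - Y) ** 2)
for _ in range(300):
    H, pred = forward(X)
    G = 2.0 * (pred - Y) / (n * n_clip)  # dLoss/dPred for mean squared error
    gW2, gb2 = H.T @ G, G.sum(axis=0)
    GH = (G @ W2.T) * (H > 0)            # backprop through the ReLU
    gW1, gb1 = X.T @ GH, GH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
loss = np.mean((pred - Y) ** 2)
print(f"MSE before: {loss0:.3f}  after: {loss:.3f}")
```

Once such a decoder is trained, the later stages of the paper operate on its outputs and parameters (contrastive dimensionality reduction, then voxel-masking optimization) rather than on raw voxel data.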
Related papers
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z)
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce MindFormer, a novel method for semantically aligning multi-subject fMRI signals.
The model is designed to generate fMRI-conditioned feature vectors that can condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- Finding Shared Decodable Concepts and their Negations in the Brain [4.111712524255376]
We train a highly accurate contrastive model that maps brain responses during naturalistic image viewing to CLIP embeddings.
We then use a novel adaptation of the DBSCAN clustering algorithm to cluster the parameters of participant-specific contrastive models.
Examining the images most and least associated with each SDC cluster gives us additional insight into the semantic properties of each SDC.
arXiv Detail & Related papers (2024-05-27T21:28:26Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Language Knowledge-Assisted Representation Learning for Skeleton-Based Action Recognition [71.35205097460124]
How humans understand and recognize the actions of others is a complex neuroscientific problem.
LA-GCN proposes a graph convolutional network assisted by knowledge from large-scale language models (LLMs).
arXiv Detail & Related papers (2023-05-21T08:29:16Z)
- Brain Captioning: Decoding human brain activity into images and text [1.5486926490986461]
We present an innovative method for decoding brain activity into meaningful images and captions.
Our approach takes advantage of cutting-edge image captioning models and incorporates a unique image reconstruction pipeline.
We evaluate our methods using quantitative metrics for both generated captions and images.
arXiv Detail & Related papers (2023-05-19T09:57:19Z)
- Semantic Brain Decoding: from fMRI to conceptually similar image reconstruction of visual stimuli [0.29005223064604074]
We propose a novel approach to brain decoding that also relies on semantic and contextual similarity.
We employ an fMRI dataset of natural image vision and create a deep learning decoding pipeline inspired by the existence of both bottom-up and top-down processes in human vision.
We produce reconstructions of visual stimuli that match the original content very well on a semantic level, surpassing the state of the art in previous literature.
arXiv Detail & Related papers (2022-12-13T16:54:08Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- Emotional EEG Classification using Connectivity Features and Convolutional Neural Networks [81.74442855155843]
We introduce a new classification system that utilizes brain connectivity with a CNN and validate its effectiveness via the emotional video classification.
The level of concentration of the brain connectivity related to the emotional property of the target video is correlated with classification performance.
arXiv Detail & Related papers (2021-01-18T13:28:08Z)
- Convolutional Neural Networks for cytoarchitectonic brain mapping at large scale [0.33727511459109777]
We present a new workflow for mapping cytoarchitectonic areas in large series of cell-body stained histological sections of human postmortem brains.
It is based on a Deep Convolutional Neural Network (CNN), trained on pairs of annotated section images with a large number of unannotated sections in between.
The new workflow does not require preceding 3D-reconstruction of sections, and is robust against histological artefacts.
arXiv Detail & Related papers (2020-11-25T16:25:13Z)
- CANet: Context Aware Network for 3D Brain Glioma Segmentation [33.34852704111597]
We propose a novel approach named Context-Aware Network (CANet) for brain glioma segmentation.
CANet captures high dimensional and discriminative features with contexts from both the convolutional space and feature interaction graphs.
We evaluate our method using publicly accessible brain glioma segmentation datasets BRATS 2017, BRATS 2018 and BRATS 2019.
arXiv Detail & Related papers (2020-07-15T16:12:41Z)
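Among the entries above, "Finding Shared Decodable Concepts and their Negations in the Brain" clusters the parameters of participant-specific contrastive models with an adaptation of DBSCAN. The NumPy-only sketch below shows plain DBSCAN on synthetic "decoder direction" vectors; the eps and min_samples values, dimensionality, and data are illustrative assumptions, not the paper's adapted algorithm or settings.

```python
import numpy as np

def dbscan(X, eps=0.5, min_samples=5):
    """Minimal DBSCAN: returns one label per point (-1 = noise)."""
    n = len(X)
    # Pairwise Euclidean distances and epsilon-neighbourhoods.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    core = np.array([len(nb) >= min_samples for nb in neighbors])
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        # Grow a new cluster outward from this unvisited core point.
        labels[i] = cluster
        frontier = list(neighbors[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if core[j]:  # only core points expand the cluster further
                    frontier.extend(neighbors[j])
        cluster += 1
    return labels

rng = np.random.default_rng(1)
# Two tight blobs standing in for decoder directions that agree across
# participants, plus a few scattered "noise" directions shared by no one.
blob_a = rng.normal(loc=0.0, scale=0.1, size=(20, 8))
blob_b = rng.normal(loc=3.0, scale=0.1, size=(20, 8))
noise = rng.uniform(-10, 10, size=(5, 8))
X = np.vstack([blob_a, blob_b, noise])

labels = dbscan(X, eps=1.0, min_samples=4)
print("clusters found:", len(set(labels) - {-1}))
```

Clusters whose members come from many different participants would correspond to shared decodable concepts, while the noise label absorbs idiosyncratic directions.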
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.