Text2Brain: Synthesis of Brain Activation Maps from Free-form Text Query
- URL: http://arxiv.org/abs/2109.13814v1
- Date: Tue, 28 Sep 2021 15:39:22 GMT
- Title: Text2Brain: Synthesis of Brain Activation Maps from Free-form Text Query
- Authors: Gia H. Ngo and Minh Nguyen and Nancy F. Chen and Mert R. Sabuncu
- Abstract summary: Text2Brain is a neural network approach for coordinate-based meta-analysis of neuroimaging studies.
We show that Text2Brain can synthesize anatomically plausible neural activation patterns from free-form textual descriptions.
- Score: 28.26166305556377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most neuroimaging experiments are under-powered, limited by the number of subjects and cognitive processes that an individual study can investigate. Nonetheless, over decades of research, neuroscience has accumulated an extensive wealth of results. It remains a challenge to digest this growing knowledge base and obtain new insights, since existing meta-analytic tools are limited to keyword queries. In this work, we propose Text2Brain, a neural network approach for coordinate-based meta-analysis of neuroimaging studies that synthesizes brain activation maps from open-ended text queries. Combining a transformer-based text encoder and a 3D image generator, Text2Brain was trained on variable-length text snippets and their corresponding activation maps sampled from 13,000 published neuroimaging studies. We demonstrate that Text2Brain can synthesize anatomically plausible neural activation patterns from free-form textual descriptions of cognitive concepts. Text2Brain is available at https://braininterpreter.com as a web-based tool for retrieving established priors and generating new hypotheses for neuroscience research.
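To make the architecture described in the abstract concrete, here is a minimal PyTorch sketch of the pairing it names: a transformer-based text encoder feeding a 3D image generator. All layer sizes, the mean-pooling step, the randomly initialized toy vocabulary, and the output voxel grid are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a Text2Brain-style model: a transformer text encoder
# feeding a 3D transposed-convolution generator. Sizes are assumptions.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Encodes a tokenized text query into a single latent vector."""
    def __init__(self, vocab_size=30522, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids):                  # (batch, seq_len)
        h = self.encoder(self.embed(token_ids))    # (batch, seq_len, d_model)
        return h.mean(dim=1)                       # mean-pool over tokens

class MapGenerator(nn.Module):
    """Decodes the latent vector into a 3D brain activation volume."""
    def __init__(self, d_model=256):
        super().__init__()
        self.project = nn.Linear(d_model, 64 * 6 * 7 * 6)
        self.deconv = nn.Sequential(               # 6x7x6 -> 48x56x48 voxels
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, z):
        x = self.project(z).view(-1, 64, 6, 7, 6)
        return self.deconv(x)                      # (batch, 1, 48, 56, 48)

encoder, generator = TextEncoder(), MapGenerator()
tokens = torch.randint(0, 30522, (2, 12))          # two dummy 12-token queries
volumes = generator(encoder(tokens))
print(volumes.shape)                               # torch.Size([2, 1, 48, 56, 48])
```

Training such a model would minimize a voxel-wise regression loss (e.g., mean squared error) between each generated volume and the activation map paired with the corresponding text snippet.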
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representations of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the artificial neuron sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Decoding Linguistic Representations of Human Brain [21.090956290947275]
We present a taxonomy of brain-to-language decoding of both textual and speech formats.
This work integrates two types of research: neuroscience focusing on language understanding and deep learning-based brain decoding.
arXiv Detail & Related papers (2024-07-30T07:55:44Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Chat2Brain: A Method for Mapping Open-Ended Semantic Queries to Brain Activation Maps [59.648646222905235]
We propose Chat2Brain, a method that couples LLMs with the basic text-to-image model Text2Brain to map semantic queries to brain activation maps.
We demonstrate that Chat2Brain can synthesize plausible neural activation patterns for more complex text queries.
arXiv Detail & Related papers (2023-09-10T13:06:45Z)
- Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey) [9.14580723964253]
Can we obtain insights about the brain using AI models?
How is the information in deep learning models related to brain recordings?
Decoding models solve the inverse problem of reconstructing stimuli given the fMRI recordings.
Inspired by the effectiveness of deep learning models for natural language processing, computer vision, and speech, several neural encoding and decoding models have been recently proposed.
arXiv Detail & Related papers (2023-07-17T06:54:36Z)
- Probing Brain Context-Sensitivity with Masked-Attention Generation [87.31930367845125]
We use GPT-2 transformers to generate word embeddings that capture a fixed amount of contextual information.
We then test whether these embeddings can predict fMRI brain activity in humans listening to naturalistic text (a sketch of this embedding-to-fMRI encoding recipe follows the list below).
arXiv Detail & Related papers (2023-05-23T09:36:21Z)
- BrainBERT: Self-supervised representation learning for intracranial recordings [18.52962864519609]
We create a reusable Transformer, BrainBERT, for intracranial recordings, bringing modern representation learning approaches to neuroscience.
Much like in NLP and speech recognition, this Transformer enables classifying complex concepts, with higher accuracy and with much less data.
In the future, far more concepts will be decodable from neural recordings by using representation learning, potentially unlocking the brain like language models unlocked language.
arXiv Detail & Related papers (2023-02-28T07:40:37Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- A Transformer-based Neural Language Model that Synthesizes Brain Activation Maps from Free-Form Text Queries [37.322245313730654]
Text2Brain is an easy-to-use tool for synthesizing brain activation maps from open-ended text queries.
Text2Brain was built on a transformer-based neural network language model and a coordinate-based meta-analysis of neuroimaging studies.
arXiv Detail & Related papers (2022-07-24T09:15:03Z)
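As noted in the Probing Brain Context-Sensitivity entry above, a common encoding recipe underlies several of these studies: extract contextual word embeddings from a language model given a fixed amount of context, then fit a linear map from embeddings to fMRI responses. Below is a minimal, hedged sketch of that recipe; the context length, the stimulus sentence, and the random matrix standing in for voxel responses are placeholders, not the paper's data or exact procedure.

```python
# Sketch of an LLM-to-fMRI encoding analysis: GPT-2 embeddings from a
# fixed-size context window, ridge-regressed onto voxel responses.
# The text and the "fMRI" targets below are synthetic placeholders.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

def word_embedding(words, i, context=8):
    """Embed word i given at most `context` preceding words."""
    span = " ".join(words[max(0, i - context): i + 1])
    inputs = tokenizer(span, return_tensors="pt")
    with torch.no_grad():
        h = model(**inputs).last_hidden_state      # (1, seq_len, 768)
    return h[0, -1].numpy()                        # state at the final token

words = ("the listener follows the story while the brain tracks meaning "
         "across many levels of context in naturalistic speech").split()
X = np.stack([word_embedding(words, i) for i in range(len(words))])
y = np.random.randn(len(words), 50)                # fake responses, 50 voxels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
enc = Ridge(alpha=10.0).fit(X_tr, y_tr)
print("held-out encoding R^2:", enc.score(X_te, y_te))  # near 0 for noise
```

Varying the `context` argument while holding the regression fixed is one way to probe how much contextual information a brain region is sensitive to.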
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences of its use.