A Transformer-based Neural Language Model that Synthesizes Brain
Activation Maps from Free-Form Text Queries
- URL: http://arxiv.org/abs/2208.00840v1
- Date: Sun, 24 Jul 2022 09:15:03 GMT
- Title: A Transformer-based Neural Language Model that Synthesizes Brain
Activation Maps from Free-Form Text Queries
- Authors: Gia H. Ngo, Minh Nguyen, Nancy F. Chen, Mert R. Sabuncu
- Abstract summary: Text2Brain is an easy-to-use tool for synthesizing brain activation maps from open-ended text queries.
Text2Brain was built on a transformer-based neural network language model and a coordinate-based meta-analysis of neuroimaging studies.
- Score: 37.322245313730654
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuroimaging studies are often limited by the number of subjects and
cognitive processes that can be feasibly interrogated. However, a rapidly
growing number of neuroscientific studies have collectively accumulated an
extensive wealth of results. Digesting this growing literature and obtaining
novel insights remains a major challenge, since existing meta-analytic
tools are constrained to keyword queries. In this paper, we present Text2Brain,
an easy-to-use tool for synthesizing brain activation maps from open-ended text
queries. Text2Brain was built on a transformer-based neural network language
model and a coordinate-based meta-analysis of neuroimaging studies. Text2Brain
combines a transformer-based text encoder and a 3D image generator, and was
trained on variable-length text snippets and their corresponding activation
maps sampled from 13,000 published studies. In our experiments, we demonstrate
that Text2Brain can synthesize meaningful neural activation patterns from
various free-form textual descriptions. Text2Brain is available at
https://braininterpreter.com as a web-based tool for efficiently searching
through the vast neuroimaging literature and generating new hypotheses.
Related papers
- Decoding Linguistic Representations of Human Brain [21.090956290947275]
We present a taxonomy of brain-to-language decoding of both textual and speech formats.
This work integrates two types of research: neuroscience focusing on language understanding and deep learning-based brain decoding.
arXiv Detail & Related papers (2024-07-30T07:55:44Z)
- Language Reconstruction with Brain Predictive Coding from fMRI Data [28.217967547268216]
The theory of predictive coding suggests that the human brain continuously predicts upcoming word representations.
PredFT achieves state-of-the-art decoding performance with a maximum BLEU-1 score of 27.8%.
arXiv Detail & Related papers (2024-05-19T16:06:02Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Chat2Brain: A Method for Mapping Open-Ended Semantic Queries to Brain Activation Maps [59.648646222905235]
We propose Chat2Brain, a method that combines LLMs with the basic text-to-image model Text2Brain to map semantic queries to brain activation maps.
We demonstrate that Chat2Brain can synthesize plausible neural activation patterns for more complex text queries.
arXiv Detail & Related papers (2023-09-10T13:06:45Z)
- Multimodal Neurons in Pretrained Text-Only Transformers [52.20828443544296]
We identify "multimodal neurons" that convert visual representations into corresponding text.
We show that multimodal neurons operate on specific visual concepts across inputs, and have a systematic causal effect on image captioning.
arXiv Detail & Related papers (2023-08-03T05:27:12Z)
- Probing Brain Context-Sensitivity with Masked-Attention Generation [87.31930367845125]
We use GPT-2 transformers to generate word embeddings that capture a fixed amount of contextual information.
We then test whether these embeddings can predict fMRI brain activity in humans listening to naturalistic text; a minimal sketch of this encoding-model setup appears after this list.
arXiv Detail & Related papers (2023-05-23T09:36:21Z)
- BrainBERT: Self-supervised representation learning for intracranial recordings [18.52962864519609]
We create a reusable Transformer, BrainBERT, for intracranial recordings, bringing modern representation learning approaches to neuroscience.
Much like in NLP and speech recognition, this Transformer enables classifying complex concepts with higher accuracy and much less data.
In the future, far more concepts will be decodable from neural recordings by using representation learning, potentially unlocking the brain like language models unlocked language.
arXiv Detail & Related papers (2023-02-28T07:40:37Z)
- Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot Sentiment Classification [78.120927891455]
State-of-the-art brain-to-text systems have achieved great success in decoding language directly from brain signals using neural networks.
In this paper, we extend the problem to open-vocabulary Electroencephalography (EEG)-to-Text sequence-to-sequence decoding and zero-shot sentence sentiment classification on natural reading tasks.
Our model achieves a 40.1% BLEU-1 score on EEG-To-Text decoding and a 55.6% F1 score on zero-shot EEG-based ternary sentiment classification, which significantly outperforms supervised baselines.
arXiv Detail & Related papers (2021-12-05T21:57:22Z)
- Text2Brain: Synthesis of Brain Activation Maps from Free-form Text Query [28.26166305556377]
Text2Brain is a neural network approach for coordinate-based meta-analysis of neuroimaging studies.
We show that Text2Brain can synthesize anatomically-plausible neural activation patterns from free-form textual descriptions.
arXiv Detail & Related papers (2021-09-28T15:39:22Z)
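As referenced in the masked-attention generation entry above, here is a minimal sketch of the general encoding-model recipe that paper describes: extract GPT-2 word embeddings restricted to a fixed amount of context, then fit a linear map to voxel responses. Truncating the input window is a simplified stand-in for the paper's masked-attention mechanism, and the fMRI responses below are synthetic placeholders, not real data.

```python
# Sketch of a fixed-context encoding model: GPT-2 embeddings that see only
# a bounded window of preceding words, mapped linearly to voxel responses.
# Window truncation approximates the paper's masked-attention idea; the
# voxel data is synthetic and for illustration only.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import GPT2Model, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

def fixed_context_embedding(words, idx, context=8):
    """Embedding of words[idx] given at most `context` preceding words."""
    window = words[max(0, idx - context): idx + 1]
    ids = tok(" ".join(window), return_tensors="pt").input_ids
    with torch.no_grad():
        hidden = model(ids).last_hidden_state       # (1, seq_len, 768)
    return hidden[0, -1].numpy()                    # state at the target word

words = "the quick brown fox jumps over the lazy dog again and again".split()
X = np.stack([fixed_context_embedding(words, i) for i in range(len(words))])
y = np.random.randn(len(words), 100)                # synthetic 100-voxel fMRI
Ridge(alpha=1.0).fit(X, y)                          # per-voxel linear encoding
```

Varying `context` and comparing held-out prediction accuracy is how such a setup probes how much contextual information different brain regions are sensitive to.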