Probing Brain Context-Sensitivity with Masked-Attention Generation
- URL: http://arxiv.org/abs/2305.13863v1
- Date: Tue, 23 May 2023 09:36:21 GMT
- Title: Probing Brain Context-Sensitivity with Masked-Attention Generation
- Authors: Alexandre Pasquiou, Yair Lakretz, Bertrand Thirion, Christophe Pallier
- Abstract summary: We use GPT-2 transformers to generate word embeddings that capture a fixed amount of contextual information.
We then tested whether these embeddings could predict fMRI brain activity in humans listening to naturalistic text.
- Score: 87.31930367845125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Two fundamental questions in neurolinguistics concern the brain regions that
integrate information beyond the lexical level, and the size of their window of
integration. To address these questions we introduce a new approach named
masked-attention generation. It uses GPT-2 transformers to generate word
embeddings that capture a fixed amount of contextual information. We then
tested whether these embeddings could predict fMRI brain activity in humans
listening to naturalistic text. The results showed that most of the cortex
within the language network is sensitive to contextual information, and that
the right hemisphere is more sensitive to longer contexts than the left.
Masked-attention generation supports previous analyses of context-sensitivity
in the brain, and complements them by quantifying the window size of context
integration per voxel.
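The abstract describes masked-attention generation only at a high level. As a rough illustration of the fixed-context idea, the sketch below re-encodes each token with at most k tokens of preceding context using HuggingFace GPT-2; this sliding-window re-encoding is a simplified stand-in for the paper's actual attention-masking procedure, and the function name and parameters are ours, not the authors'.

```python
# Illustrative sketch only: approximates "a fixed amount of contextual
# information" by re-encoding each token with a bounded window of context.
# The paper instead masks attention inside the transformer; details differ.
import torch
from transformers import GPT2Model, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

def fixed_context_embeddings(text: str, k: int) -> torch.Tensor:
    """One embedding per token, each computed from at most k preceding tokens."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]  # (T,)
    states = []
    with torch.no_grad():
        for t in range(len(ids)):
            window = ids[max(0, t - k): t + 1].unsqueeze(0)  # current token + k predecessors
            hidden = model(window).last_hidden_state         # (1, window_len, hidden)
            states.append(hidden[0, -1])                     # hidden state of current token
    return torch.stack(states)                               # (T, hidden_size)

emb = fixed_context_embeddings("The cat sat on the mat.", k=5)
print(emb.shape)  # (num_tokens, 768) for base GPT-2
```

Sweeping k (e.g. 1, 3, 7, 15, ...) and comparing how well each set of embeddings predicts a voxel's activity is what lets the method assign a context-window size per voxel.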
Related papers
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z)
- Adaptive Critical Subgraph Mining for Cognitive Impairment Conversion Prediction with T1-MRI-based Brain Network [4.835051121929712]
Predicting the conversion to early-stage dementia is critical for mitigating its progression.
Traditional T1-weighted magnetic resonance imaging (T1-MRI) research focuses on identifying brain atrophy regions.
Brain-SubGNN is a novel graph representation network that mines and enhances critical subgraphs based on T1-MRI.
arXiv Detail & Related papers (2024-03-20T06:46:01Z)
- Chat2Brain: A Method for Mapping Open-Ended Semantic Queries to Brain Activation Maps [59.648646222905235]
We propose a method called Chat2Brain that combines LLMs with a basic text-to-image model, known as Text2Brain, to map semantic queries to brain activation maps.
We demonstrate that Chat2Brain can synthesize plausible neural activation patterns for more complex tasks of text queries.
arXiv Detail & Related papers (2023-09-10T13:06:45Z)
- Coupling Artificial Neurons in BERT and Biological Neurons in the Human Brain [9.916033214833407]
This study introduces a novel, general, and effective framework to link transformer-based NLP models and neural activities in response to language.
Our experimental results demonstrate that 1) the activations of ANs and BNs are significantly synchronized; 2) the ANs carry meaningful linguistic/semantic information and anchor to their BN signatures; and 3) the anchored BNs are interpretable in a neurolinguistic context.
arXiv Detail & Related papers (2023-03-27T01:41:48Z)
- Information-Restricted Neural Language Models Reveal Different Brain Regions' Sensitivity to Semantics, Syntax and Context [87.31930367845125]
We trained a lexical language model, GloVe, and a supra-lexical language model, GPT-2, on a text corpus.
We then assessed to what extent these information-restricted models could predict the time-courses of fMRI signals from humans listening to naturalistic text (a generic sketch of this voxelwise encoding recipe follows the list below).
Our analyses show that, while most brain regions involved in language are sensitive to both syntactic and semantic variables, the relative magnitudes of these effects vary considerably across regions.
arXiv Detail & Related papers (2023-02-28T08:16:18Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- A Transformer-based Neural Language Model that Synthesizes Brain Activation Maps from Free-Form Text Queries [37.322245313730654]
Text2Brain is an easy-to-use tool for synthesizing brain activation maps from open-ended text queries.
Text2Brain was built on a transformer-based neural network language model and a coordinate-based meta-analysis of neuroimaging studies.
arXiv Detail & Related papers (2022-07-24T09:15:03Z)
- Toward a realistic model of speech processing in the brain with self-supervised learning [67.7130239674153]
Self-supervised algorithms trained on the raw waveform are a promising candidate for such a model.
We show that Wav2Vec 2.0 learns brain-like representations with as little as 600 hours of unlabelled speech.
arXiv Detail & Related papers (2022-06-03T17:01:46Z)
- Neural Language Taskonomy: Which NLP Tasks are the most Predictive of fMRI Brain Activity? [3.186888145772382]
Several popular Transformer-based language models have proven successful for text-driven brain encoding.
In this work, we explore transfer learning from representations learned for ten popular natural language processing tasks.
Experiments across all ten task representations yield several cognitive insights.
arXiv Detail & Related papers (2022-05-03T10:23:08Z)
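Several entries above, including this paper and Information-Restricted Neural Language Models, evaluate language-model features by fitting linear encoding models to fMRI time-courses and scoring held-out predictions per voxel. Below is a minimal sketch of that generic recipe with scikit-learn; the arrays are random stand-ins, and real pipelines add steps such as HRF convolution and within-subject cross-validation.

```python
# Generic voxelwise encoding model (sketch; data are random placeholders).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 768))  # features: one embedding per fMRI volume (TR)
Y = rng.standard_normal((1000, 500))  # responses: TRs x voxels

# Keep temporal order intact when splitting fMRI time series.
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, shuffle=False)

# Ridge regression with cross-validated regularisation; RidgeCV fits all
# voxels (columns of Y) in one multi-output model.
enc = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
Y_hat = enc.predict(X_te)

# Score each voxel by the correlation between predicted and observed signal.
r = np.array([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])])
print("median voxel correlation:", np.median(r))
```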
This list is automatically generated from the titles and abstracts of the papers on this site.