The Temporal Structure of Language Processing in the Human Brain
Corresponds to The Layered Hierarchy of Deep Language Models
- URL: http://arxiv.org/abs/2310.07106v1
- Date: Wed, 11 Oct 2023 01:03:42 GMT
- Title: The Temporal Structure of Language Processing in the Human Brain
Corresponds to The Layered Hierarchy of Deep Language Models
- Authors: Ariel Goldstein, Eric Ham, Mariano Schain, Samuel Nastase, Zaid Zada,
Avigail Dabush, Bobbi Aubrey, Harshvardhan Gazula, Amir Feder, Werner K
Doyle, Sasha Devore, Patricia Dugan, Daniel Friedman, Roi Reichart, Michael
Brenner, Avinatan Hassidim, Orrin Devinsky, Adeen Flinker, Omer Levy, Uri
Hasson
- Abstract summary: We show that the layered hierarchy of Deep Language Models (DLMs) may be used to model the temporal dynamics of language comprehension in the brain.
Our results reveal a connection between human language processing and DLMs, with the DLM's layer-by-layer accumulation of contextual information mirroring the timing of neural activity in high-order language areas.
- Score: 37.605014098041906
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep Language Models (DLMs) provide a novel computational paradigm for
understanding the mechanisms of natural language processing in the human brain.
Unlike traditional psycholinguistic models, DLMs use layered sequences of
continuous numerical vectors to represent words and context, allowing a
plethora of emerging applications such as human-like text generation. In this
paper we show evidence that the layered hierarchy of DLMs may be used to model
the temporal dynamics of language comprehension in the brain by demonstrating a
strong correlation between DLM layer depth and the time at which layers are
most predictive of the human brain. Our ability to temporally resolve
individual layers benefits from our use of electrocorticography (ECoG) data,
which has a much higher temporal resolution than noninvasive methods like fMRI.
Using ECoG, we record neural activity from participants listening to a
30-minute narrative while also feeding the same narrative to a high-performing
DLM (GPT2-XL). We then extract contextual embeddings from the different layers
of the DLM and use linear encoding models to predict neural activity. We first
focus on the Inferior Frontal Gyrus (IFG, or Broca's area) and then extend our
model to track the increasing temporal receptive window along the linguistic
processing hierarchy from auditory to syntactic and semantic areas. Our results
reveal a connection between human language processing and DLMs, with the DLM's
layer-by-layer accumulation of contextual information mirroring the timing of
neural activity in high-order language areas.
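The encoding analysis described above, fitting linear models from a layer's contextual embeddings to neural activity at a range of lags, can be illustrated with a minimal sketch. Everything here is synthetic and for illustration only: the random embeddings stand in for GPT2-XL layer activations, the "neural" signal is simulated rather than real ECoG, and `ridge_fit_predict` is a hypothetical helper, not code from the paper.

```python
# Minimal sketch of a lag-resolved linear encoding analysis (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n_words, emb_dim, n_lags = 200, 16, 5

# Synthetic "layer" embeddings: one vector per word.
layer_emb = rng.standard_normal((n_words, emb_dim))

# Simulate neural responses that depend on the embeddings only at lag 2,
# so the encoding model should peak there.
true_w = rng.standard_normal(emb_dim)
neural = np.zeros((n_words, n_lags))
neural[:, 2] = layer_emb @ true_w
neural += 0.5 * rng.standard_normal(neural.shape)

def ridge_fit_predict(X_tr, y_tr, X_te, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + aI)^-1 X'y."""
    d = X_tr.shape[1]
    w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ y_tr)
    return X_te @ w

# Split words into train/test and score each lag by correlation between
# predicted and observed activity, mirroring per-lag encoding curves.
half = n_words // 2
scores = []
for lag in range(n_lags):
    pred = ridge_fit_predict(layer_emb[:half], neural[:half, lag],
                             layer_emb[half:])
    r = np.corrcoef(pred, neural[half:, lag])[0, 1]
    scores.append(r)

best_lag = int(np.argmax(scores))
print("encoding correlation per lag:", np.round(scores, 2))
print("peak lag:", best_lag)  # should recover the simulated lag (2)
```

In the paper's setting, repeating this per layer and recording each layer's peak lag is what yields the reported correlation between layer depth and the time at which a layer best predicts the brain; here a single simulated lag simply shows the mechanics.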
Related papers
- How does longer temporal context enhance multimodal narrative video processing in the brain? [39.57117698934923]
This study investigates how the temporal context length of video clips and narrative-task prompting shape brain-model alignment during naturalistic movie watching. We find that increasing clip duration substantially improves brain alignment for multimodal large language models (MLLMs). Shorter temporal windows align with perceptual and early language regions, while longer windows preferentially align with higher-order integrative regions.
arXiv Detail & Related papers (2026-02-07T14:34:00Z)
- Language-Specific Layer Matters: Efficient Multilingual Enhancement for Large Vision-Language Models [60.39744129890118]
Large vision-language models (LVLMs) have demonstrated exceptional capabilities in understanding visual information with human languages. In this work, we identify a salient correlation between the multilingual understanding ability of LVLMs and language-specific neuron activations in shallow layers. We introduce PLAST, a training recipe that achieves efficient multilingual enhancement for LVLMs by Precise LAnguage-Specific layers fine-Tuning.
arXiv Detail & Related papers (2025-08-25T18:15:25Z)
- DLM-One: Diffusion Language Models for One-Step Sequence Generation [63.43422118066493]
DLM-One is a score-distillation-based framework for one-step sequence generation with continuous diffusion language models. We investigate whether DLM-One can achieve substantial gains in sampling efficiency for language modeling.
arXiv Detail & Related papers (2025-05-30T22:42:23Z)
- Do Large Language Models Think Like the Brain? Sentence-Level Evidence from fMRI and Hierarchical Embeddings [28.210559128941593]
This study investigates how hierarchical representations in large language models align with the dynamic neural responses during human sentence comprehension. Results show that improvements in model performance drive the evolution of representational architectures toward brain-like hierarchies.
arXiv Detail & Related papers (2025-05-28T16:40:06Z)
- Analysis of Argument Structure Constructions in a Deep Recurrent Language Model [0.0]
We explore the representation and processing of Argument Structure Constructions (ASCs) in a recurrent neural language model.
Our results show that sentence representations form distinct clusters corresponding to the four ASCs across all hidden layers.
This indicates that even a relatively simple, brain-constrained recurrent neural network can effectively differentiate between various construction types.
arXiv Detail & Related papers (2024-08-06T09:27:41Z)
- Investigating the Timescales of Language Processing with EEG and Language Models [0.0]
This study explores the temporal dynamics of language processing by examining the alignment between word representations from a pre-trained language model and EEG data.
Using a Temporal Response Function (TRF) model, we investigate how neural activity corresponds to model representations across different layers.
Our analysis reveals patterns in TRFs from distinct layers, highlighting varying contributions to lexical and compositional processing.
arXiv Detail & Related papers (2024-06-28T12:49:27Z)
- Brain-Like Language Processing via a Shallow Untrained Multihead Attention Network [16.317199232071232]
Large Language Models (LLMs) have been shown to be effective models of the human language system.
In this work, we investigate the key architectural components driving the surprising alignment of untrained models.
arXiv Detail & Related papers (2024-06-21T12:54:03Z)
- Du-IN: Discrete units-guided mask modeling for decoding speech from Intracranial Neural signals [5.283718601431859]
Invasive brain-computer interfaces with Electrocorticography (ECoG) have shown promise for high-performance speech decoding in medical applications.
We developed the Du-IN model, which extracts contextual embeddings based on region-level tokens through discrete codex-guided mask modeling.
Our model achieves state-of-the-art performance on the 61-word classification task, surpassing all baselines.
arXiv Detail & Related papers (2024-05-19T06:00:36Z)
- Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models [117.20416338476856]
Large language models (LLMs) demonstrate remarkable multilingual capabilities without being pre-trained on specially curated multilingual parallel corpora.
We propose a novel detection method, language activation probability entropy (LAPE), to identify language-specific neurons within LLMs.
Our findings indicate that LLMs' proficiency in processing a particular language is predominantly due to a small subset of neurons.
arXiv Detail & Related papers (2024-02-26T09:36:05Z)
- Contextual Feature Extraction Hierarchies Converge in Large Language Models and the Brain [12.92793034617015]
We show that as large language models (LLMs) achieve higher performance on benchmark tasks, they become more brain-like.
We also show the importance of contextual information in improving model performance and brain similarity.
arXiv Detail & Related papers (2024-01-31T08:48:35Z)
- Language Generation from Brain Recordings [68.97414452707103]
We propose a generative language BCI that utilizes the capacity of a large language model and a semantic brain decoder.
The proposed model can generate coherent language sequences aligned with the semantic content of visual or auditory language stimuli.
Our findings demonstrate the potential and feasibility of employing BCIs in direct language generation.
arXiv Detail & Related papers (2023-11-16T13:37:21Z)
- Self-supervised models of audio effectively explain human cortical responses to speech [71.57870452667369]
We capitalize on the progress of self-supervised speech representation learning to create new state-of-the-art models of the human auditory system.
These results show that self-supervised models effectively capture the hierarchy of information relevant to different stages of speech processing in the human cortex.
arXiv Detail & Related papers (2022-05-27T22:04:02Z)
- Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects [82.81964713263483]
A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli.
Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli.
arXiv Detail & Related papers (2021-10-12T15:30:21Z)
- Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans [75.15855405318855]
We studied whether a modern artificial neural network trained with "deep learning" methods mimics a central aspect of human sentence processing.
Although the network was solely trained to predict the next word in a large corpus, analysis showed the emergence of specialized units that successfully handled local and long-distance syntactic agreement.
We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns.
arXiv Detail & Related papers (2020-06-19T12:00:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.