What does it mean to understand language?
- URL: http://arxiv.org/abs/2511.19757v1
- Date: Mon, 24 Nov 2025 22:21:55 GMT
- Title: What does it mean to understand language?
- Authors: Colton Casto, Anna Ivanova, Evelina Fedorenko, Nancy Kanwisher
- Abstract summary: We propose that because processing within the brain's core language system is fundamentally limited, deeply understanding language requires exporting information from the language system to other brain regions. We argue that recent progress in cognitive neuroscience provides both the conceptual foundation and the methods to directly test it, thus opening up a new strategy to reveal what it means, cognitively and neurally, to understand language.
- Score: 7.1143325029431415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Language understanding entails not just extracting the surface-level meaning of the linguistic input, but constructing rich mental models of the situation it describes. Here we propose that because processing within the brain's core language system is fundamentally limited, deeply understanding language requires exporting information from the language system to other brain regions that compute perceptual and motor representations, construct mental models, and store our world knowledge and autobiographical memories. We review the existing evidence for this hypothesis, and argue that recent progress in cognitive neuroscience provides both the conceptual foundation and the methods to directly test it, thus opening up a new strategy to reveal what it means, cognitively and neurally, to understand language.
Related papers
- Far from the Shallow: Brain-Predictive Reasoning Embedding through Residual Disentanglement [43.96899536703126]
Modern large language models (LLMs) are increasingly used to model neural responses to language. Their internal representations are highly "entangled," mixing information about lexicon, syntax, meaning, and reasoning. This entanglement biases conventional brain encoding analyses toward linguistically shallow features.
arXiv Detail & Related papers (2025-10-26T22:46:26Z)
- Language models align with brain regions that represent concepts across modalities [41.64161126642105]
We investigate the relationship between language models (LMs) and two neural metrics. Our experiments show that both language-only and language-vision models predict the signal better in more meaning-consistent areas of the brain.
arXiv Detail & Related papers (2025-08-15T15:32:19Z)
- Decoding Linguistic Representations of Human Brain [21.090956290947275]
We present a taxonomy of brain-to-language decoding of both textual and speech formats.
This work integrates two types of research: neuroscience focusing on language understanding and deep learning-based brain decoding.
arXiv Detail & Related papers (2024-07-30T07:55:44Z)
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z)
- Retentive or Forgetful? Diving into the Knowledge Memorizing Mechanism of Language Models [49.39276272693035]
Large-scale pre-trained language models have shown remarkable memorizing ability.
Vanilla neural networks without pre-training have long been observed to suffer from the catastrophic forgetting problem.
We find that 1) vanilla language models are forgetful; 2) pre-training leads to retentive language models; 3) knowledge relevance and diversification significantly influence memory formation.
arXiv Detail & Related papers (2023-05-16T03:50:38Z)
- Language Cognition and Language Computation -- Human and Machine Language Understanding [51.56546543716759]
Language understanding is a key scientific issue in the fields of cognitive and computer science.
Can a combination of the disciplines offer new insights for building intelligent language models?
arXiv Detail & Related papers (2023-01-12T02:37:00Z)
- Training language models for deeper understanding improves brain alignment [5.678337324555035]
Building systems that achieve a deeper understanding of language is one of the central goals of natural language processing (NLP).
We show that training language models for deeper narrative understanding results in richer representations that have improved alignment to human brain activity.
arXiv Detail & Related papers (2022-12-21T10:15:19Z)
- Joint processing of linguistic properties in brains and language models [14.997785690790032]
We investigate the correspondence between the detailed processing of linguistic information by the human brain versus language models.
We find that elimination of specific linguistic properties results in a significant decrease in brain alignment.
These findings provide clear evidence for the role of specific linguistic information in the alignment between brain and language models.
arXiv Detail & Related papers (2022-12-15T19:13:42Z)
- Imagination-Augmented Natural Language Understanding [71.51687221130925]
We introduce an Imagination-Augmented Cross-modal Encoder (iACE) to solve natural language understanding tasks.
iACE enables visual imagination with external knowledge transferred from the powerful generative and pre-trained vision-and-language models.
Experiments on GLUE and SWAG show that iACE achieves consistent improvement over visually-supervised pre-trained models.
arXiv Detail & Related papers (2022-04-18T19:39:36Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it provides the language-ready brain with the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.