Linguistics and Human Brain: A Perspective of Computational Neuroscience
- URL: http://arxiv.org/abs/2602.08275v2
- Date: Fri, 13 Feb 2026 13:15:58 GMT
- Title: Linguistics and Human Brain: A Perspective of Computational Neuroscience
- Authors: Fudong Zhang, Bo Chai, Yujie Wu, Wai Ting Siok, Nizhuan Wang
- Abstract summary: Computational neuroscience formalizes the hierarchical and dynamic structures of language into testable neural models. Recent advances in deep learning have powerfully advanced this pursuit. The "model-brain alignment" framework offers a methodology to evaluate the biological plausibility of language-related theories.
- Score: 2.8285202282959268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Elucidating the language-brain relationship requires bridging the methodological gap between the abstract theoretical frameworks of linguistics and the empirical neural data of neuroscience. Serving as an interdisciplinary cornerstone, computational neuroscience formalizes the hierarchical and dynamic structures of language into testable neural models through modeling, simulation, and data analysis. This enables a computational dialogue between linguistic hypotheses and neural mechanisms. Recent advances in deep learning, particularly large language models (LLMs), have powerfully advanced this pursuit. Their high-dimensional representational spaces provide a novel scale for exploring the neural basis of linguistic processing, while the "model-brain alignment" framework offers a methodology to evaluate the biological plausibility of language-related theories.
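The "model-brain alignment" framework mentioned in the abstract is commonly operationalized as representational similarity analysis (RSA): compare the pairwise-dissimilarity structure of a model's stimulus representations with that of recorded brain responses. The sketch below is a generic illustration with synthetic data, not code from this paper; all names, dimensions, and the noisy-linear-readout assumption are invented for demonstration.

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix:
    1 - Pearson correlation between every pair of stimulus vectors."""
    return 1.0 - np.corrcoef(features)

def spearman(a, b):
    """Spearman rank correlation (assumes no ties; fine for continuous data)."""
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

def alignment_score(model_features, brain_responses):
    """RSA alignment: rank-correlate the upper triangles of the two RDMs."""
    iu = np.triu_indices(model_features.shape[0], k=1)
    return spearman(rdm(model_features)[iu], rdm(brain_responses)[iu])

rng = np.random.default_rng(0)
n_stimuli, model_dim, n_voxels = 50, 64, 200
model_features = rng.standard_normal((n_stimuli, model_dim))
# Simulated "brain" data: a noisy linear readout of the model features.
brain_responses = (model_features @ rng.standard_normal((model_dim, n_voxels))
                   + 0.5 * rng.standard_normal((n_stimuli, n_voxels)))
print(f"RSA alignment: {alignment_score(model_features, brain_responses):.3f}")
```

In practice the model features would be layer activations from an LLM and the brain responses fMRI or MEG data for the same stimuli; rank correlation is used because RDM dissimilarities need not be linearly comparable across systems.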
Related papers
- Mind Meets Space: Rethinking Agentic Spatial Intelligence from a Neuroscience-inspired Perspective [53.556348738917166]
Recent advances in agentic AI have led to systems capable of autonomous task execution and language-based reasoning. Human spatial intelligence, rooted in integrated multisensory perception, spatial memory, and cognitive maps, enables flexible, context-aware decision-making in unstructured environments.
arXiv Detail & Related papers (2025-09-11T05:23:22Z) - Simulated Language Acquisition in a Biologically Realistic Model of the Brain [0.8287206589886881]
We introduce a simple mathematical formulation of six basic and broadly accepted principles of neuroscience. We implement a simulated neuromorphic system based on this formalism, which is capable of basic language acquisition. We discuss several possible extensions and implications of this result.
arXiv Detail & Related papers (2025-07-15T23:04:44Z) - Do Large Language Models Think Like the Brain? Sentence-Level Evidence from fMRI and Hierarchical Embeddings [28.210559128941593]
This study investigates how hierarchical representations in large language models align with the dynamic neural responses during human sentence comprehension. Results show that improvements in model performance drive the evolution of representational architectures toward brain-like hierarchies.
arXiv Detail & Related papers (2025-05-28T16:40:06Z) - Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representations of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z) - Augmenting learning in neuro-embodied systems through neurobiological first principles [42.810158068175646]
We describe recent bioinspired models, learning rules, and architectures for augmenting artificial neural networks. We propose a framework for augmenting ANNs, which has the potential to bridge the gap between neuroscience and AI. We show how integrating biophysical principles into task-driven spiking neural networks and neuromorphic systems provides scalable solutions.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z) - Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Connecting Neural Response measurements & Computational Models of language: a non-comprehensive guide [5.523143941738335]
Recent advances in language modelling and in neuroimaging promise improvements in the investigation of language's neurobiology.
This survey traces a line from early research linking Event Related Potentials and complexity measures derived from simple language models to contemporary studies employing Artificial Neural Network models trained on large corpora.
arXiv Detail & Related papers (2022-03-10T11:24:54Z) - Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects [82.81964713263483]
A popular approach to decomposing the neural bases of language is to correlate, across individuals, the brain responses to different stimuli.
Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli.
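A within-subject, model-based approach of the kind described above is typically an encoding model: regress each voxel's response on model-derived stimulus features, then score predictions on held-out stimuli. The sketch below is a generic illustration with synthetic data, not the method of the cited paper; all names, dimensions, and noise levels are invented.

```python
import numpy as np

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X^T X + lam*I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(1)
n_train, n_test, n_feat, n_vox = 200, 50, 32, 100
W_true = rng.standard_normal((n_feat, n_vox))
X = rng.standard_normal((n_train + n_test, n_feat))   # model features per stimulus
Y = X @ W_true + 0.5 * rng.standard_normal((n_train + n_test, n_vox))  # "voxels"

W = ridge_fit(X[:n_train], Y[:n_train])
pred = X[n_train:] @ W
# Encoding score: mean per-voxel correlation on held-out stimuli.
score = np.mean([np.corrcoef(pred[:, i], Y[n_train:, i])[0, 1]
                 for i in range(n_vox)])
print(f"held-out encoding correlation: {score:.3f}")
```

Because the model is fit and evaluated within a single subject, it needs no anatomical alignment across individuals, which is the practical advantage the abstract points to.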
arXiv Detail & Related papers (2021-10-12T15:30:21Z) - Does injecting linguistic structure into language models lead to better alignment with brain recordings? [13.880819301385854]
We evaluate whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms.
Our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain.
arXiv Detail & Related papers (2021-01-29T14:42:02Z) - Data-driven models and computational tools for neurolinguistics: a language technology perspective [12.082438928980087]
We present a review of brain imaging-based neurolinguistic studies with a focus on natural language representations.
Mutual enrichment of neurolinguistics and language technologies leads to development of brain-aware natural language representations.
arXiv Detail & Related papers (2020-03-23T20:41:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.