Coherence in the brain unfolds across separable temporal regimes
- URL: http://arxiv.org/abs/2512.20481v3
- Date: Thu, 25 Dec 2025 07:05:25 GMT
- Title: Coherence in the brain unfolds across separable temporal regimes
- Authors: Davide Staub, Finn Rabe, Akhil Misra, Yves Pauli, Roya Hüppi, Ni Yang, Nils Lang, Lars Michels, Victoria Edkins, Sascha Frühholz, Iris Sommer, Wolfram Hinzen, Philipp Homan
- Abstract summary: Coherence in language requires the brain to satisfy two competing temporal demands. We show that coherence is implemented through dissociable neural regimes of slow contextual integration and rapid event-driven reconfiguration.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Coherence in language requires the brain to satisfy two competing temporal demands: gradual accumulation of meaning across extended context and rapid reconfiguration of representations at event boundaries. Despite their centrality to language and thought, how these processes are implemented in the human brain during naturalistic listening remains unclear. Here, we tested whether these two processes can be captured by annotation-free drift and shift signals and whether their neural expression dissociates across large-scale cortical systems. These signals were derived from a large language model (LLM) and formalized contextual drift and event shifts directly from the narrative input. To enable high-precision voxelwise encoding models with stable parameter estimates, we densely sampled one healthy adult across more than 7 hours of listening to thirteen crime stories while collecting ultra high-field (7T) BOLD data. We then modeled the feature-informed hemodynamic response using a regularized encoding framework validated on independent stories. Drift predictions were prevalent in default-mode network hubs, whereas shift predictions were evident bilaterally in the primary auditory cortex and language association cortex. Furthermore, activity in default-mode and parietal networks was best explained by a signal capturing how meaning accumulates and gradually fades over the course of the narrative. Together, these findings show that coherence during language comprehension is implemented through dissociable neural regimes of slow contextual integration and rapid event-driven reconfiguration, offering a mechanistic entry point for understanding disturbances of language coherence in psychiatric disorders.
Related papers
- Revisiting Modality Invariance in a Multilingual Speech-Text Model via Neuron-Level Analysis [15.638379666159127]
We investigate where language and modality information is encoded, how selective neurons causally influence decoding, and how concentrated this influence is across the network. We identify language- and modality-selective neurons using average-precision ranking, investigate their functional role via median-replacement interventions at inference time, and analyze activation-magnitude inequality across languages and modalities.
arXiv Detail & Related papers (2026-01-24T09:22:18Z)
- Decoding Predictive Inference in Visual Language Processing via Spatiotemporal Neural Coherence [2.208251557767776]
We present a machine learning framework for decoding neural responses to visual language stimuli in Deaf signers. Our results reveal distributed left-hemispheric and low-frequency coherence as key features in language comprehension. This work demonstrates a novel approach for probing experience-driven generative models of perception in the brain.
arXiv Detail & Related papers (2025-12-24T04:19:20Z)
- A Convolutional Framework for Mapping Imagined Auditory MEG into Listened Brain Responses [0.0]
We present a Magnetoencephalography (MEG) dataset collected from trained musicians as they imagined and listened to musical and poetic stimuli. We show that both imagined and perceived brain responses contain consistent, condition-specific information.
arXiv Detail & Related papers (2025-12-03T05:23:10Z)
- Priors in Time: Missing Inductive Biases for Language Model Interpretability [58.07412640266836]
We show that Sparse Autoencoders impose priors that assume independence of concepts across time, implying stationarity. We introduce a new interpretability objective, Temporal Feature Analysis, which possesses a temporal inductive bias to decompose representations at a given time into two parts. Our results underscore the need for inductive biases that match the data in designing robust interpretability tools.
arXiv Detail & Related papers (2025-11-03T18:43:48Z)
- Far from the Shallow: Brain-Predictive Reasoning Embedding through Residual Disentanglement [43.96899536703126]
Modern large language models (LLMs) are increasingly used to model neural responses to language. Their internal representations are highly "entangled," mixing information about lexicon, syntax, meaning, and reasoning. This entanglement biases conventional brain encoding analyses toward linguistically shallow features.
arXiv Detail & Related papers (2025-10-26T22:46:26Z)
- Chronological Thinking in Full-Duplex Spoken Dialogue Language Models [66.84843878538207]
Chronological Thinking aims to improve response quality in full-duplex SDLMs. It adds no additional latency: once the user stops speaking, the agent halts thinking and begins speaking without further delay. Experiments demonstrate the effectiveness of chronological thinking through both objective metrics and human evaluations.
arXiv Detail & Related papers (2025-10-02T10:28:11Z)
- Mechanistic Understanding and Mitigation of Language Confusion in English-Centric Large Language Models [56.61984030508691]
We present the first mechanistic interpretability study of language confusion. We show that confusion points (CPs) are central to this phenomenon. We show that editing a small set of critical neurons, identified via comparative analysis with a multilingual-tuned counterpart, substantially mitigates confusion.
arXiv Detail & Related papers (2025-05-22T11:29:17Z)
- Detecting Neurocognitive Disorders through Analyses of Topic Evolution and Cross-modal Consistency in Visual-Stimulated Narratives [83.15653194899126]
Early detection of neurocognitive disorders (NCDs) is crucial for timely intervention and disease management. Current VSN-based NCD detection methods primarily focus on linguistic microstructures closely tied to bottom-up, stimulus-driven cognitive processes. We propose two novel macrostructural approaches: a Dynamic Topic Model (DTM) to track topic evolution over time, and a Text-Image Temporal Alignment Network (TITAN) to measure cross-modal consistency between narrative and visual stimuli.
arXiv Detail & Related papers (2025-01-07T12:16:26Z)
- Developmental Predictive Coding Model for Early Infancy Mono and Bilingual Vocal Continual Learning [69.8008228833895]
We propose a small-sized generative neural network equipped with a continual learning mechanism. Our model prioritizes interpretability and demonstrates the advantages of online learning.
arXiv Detail & Related papers (2024-12-23T10:23:47Z)
- Bridging Auditory Perception and Language Comprehension through MEG-Driven Encoding Models [0.12289361708127873]
We use Magnetoencephalography (MEG) data to analyze brain responses to spoken language stimuli. We develop two distinct encoding models: an audio-to-MEG encoder and a text-to-MEG encoder. Both models successfully predict neural activity, demonstrating significant correlations between estimated and observed MEG signals.
arXiv Detail & Related papers (2024-12-22T19:41:54Z)
- Decoding Continuous Character-based Language from Non-invasive Brain Recordings [33.11373366800627]
We propose a novel approach to decoding continuous language from single-trial non-invasive fMRI recordings.
A character-based decoder is designed for the semantic reconstruction of continuous language characterized by inherent character structures.
The ability to decode continuous language from single trials across subjects demonstrates the promising applications of non-invasive language brain-computer interfaces.
arXiv Detail & Related papers (2024-03-17T12:12:33Z)
- Decoding speech perception from non-invasive brain recordings [48.46819575538446]
We introduce a model trained with contrastive-learning to decode self-supervised representations of perceived speech from non-invasive recordings.
Our model can identify, from 3 seconds of MEG signals, the corresponding speech segment with up to 41% accuracy out of more than 1,000 distinct possibilities.
arXiv Detail & Related papers (2022-08-25T10:01:43Z)
- Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans [75.15855405318855]
We studied whether a modern artificial neural network trained with "deep learning" methods mimics a central aspect of human sentence processing.
Although the network was solely trained to predict the next word in a large corpus, analysis showed the emergence of specialized units that successfully handled local and long-distance syntactic agreement.
We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns.
arXiv Detail & Related papers (2020-06-19T12:00:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.