Tracing Multilingual Representations in LLMs with Cross-Layer Transcoders
- URL: http://arxiv.org/abs/2511.10840v1
- Date: Thu, 13 Nov 2025 22:51:06 GMT
- Title: Tracing Multilingual Representations in LLMs with Cross-Layer Transcoders
- Authors: Abir Harrasse, Florent Draye, Zhijing Jin, Bernhard Schölkopf
- Abstract summary: Large Language Models (LLMs) can process many languages, yet how they internally represent this diversity remains unclear. Do they form shared multilingual representations with language-specific decoding, and if so, why does performance still favor the dominant training language? We analyze their internal mechanisms using cross-layer transcoders (CLT) and attribution graphs.
- Score: 51.380449540006985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multilingual Large Language Models (LLMs) can process many languages, yet how they internally represent this diversity remains unclear. Do they form shared multilingual representations with language-specific decoding, and if so, why does performance still favor the dominant training language? To address this, we train a series of LLMs on different mixtures of multilingual data and analyze their internal mechanisms using cross-layer transcoders (CLT) and attribution graphs. Our results provide strong evidence for pivot language representations: the model employs nearly identical representations across languages, while language-specific decoding emerges in later layers. Attribution analyses reveal that decoding relies in part on a small set of high-frequency language features in the final layers, which linearly read out language identity from the first layers in the model. By intervening on these features, we can suppress one language and substitute another in the model's outputs. Finally, we study how the dominant training language influences these mechanisms across attribution graphs and decoding pathways. We argue that understanding this pivot-language mechanism is crucial for improving multilingual alignment in LLMs.
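The intervention the abstract describes, suppressing one language feature and substituting another, can be illustrated with a minimal numpy sketch. Everything below (function name, toy directions, the choice to reuse the source activation strength) is an illustrative assumption, not the paper's implementation; a real intervention would act on CLT feature activations inside the model rather than on synthetic vectors.

```python
import numpy as np

def substitute_language(hidden, v_src, v_tgt):
    """Remove the source-language direction from a hidden state and
    re-inject the same activation strength along the target direction."""
    v_src = v_src / np.linalg.norm(v_src)
    # Orthogonalise the target direction against the source one, so the
    # source feature reads out exactly zero after the swap.
    v_tgt = v_tgt - (v_tgt @ v_src) * v_src
    v_tgt = v_tgt / np.linalg.norm(v_tgt)
    coef = hidden @ v_src  # how strongly the source feature fires
    return hidden - coef * v_src + coef * v_tgt

# Toy example: a hidden state dominated by a "French" direction.
rng = np.random.default_rng(0)
d = 64
v_fr, v_de = rng.normal(size=d), rng.normal(size=d)
h = 3.0 * v_fr / np.linalg.norm(v_fr) + 0.1 * rng.normal(size=d)
h_swapped = substitute_language(h, v_fr, v_de)
```

After the swap, the hidden state's projection onto the source direction is zero while its projection onto the target direction grows, which is the behavioral signature the abstract's intervention experiments look for.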
Related papers
- Do Multilingual LLMs have specialized language heads? [0.571097144710995]
This paper explores whether multilingual LLMs have specialized language attention heads for each language. It investigates the possibility of removing language-specific heads for unwanted languages without degrading performance in the targeted languages.
arXiv Detail & Related papers (2026-02-09T13:15:17Z) - Sparse Autoencoders Can Capture Language-Specific Concepts Across Diverse Languages [11.19692440351977]
Existing studies often focus on individual neurons, but their polysemantic nature makes it difficult to isolate language-specific units. We introduce SAE-LAPE, a method based on feature activation probability, to identify language-specific features within the feed-forward network. These features influence the model's multilingual performance and language output and can be used for language identification with performance comparable to fastText.
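As a rough illustration of the activation-probability idea behind SAE-LAPE, the sketch below marks a feature as language-specific when it fires frequently for one language and rarely for the others. The thresholds, function names, and synthetic data are assumptions made for illustration; the paper's actual selection criteria may differ.

```python
import numpy as np

def activation_probabilities(acts_by_lang, threshold=0.0):
    """acts_by_lang maps language -> (n_tokens, n_features) feature
    activations; returns each feature's firing probability per language."""
    return {lang: (acts > threshold).mean(axis=0)
            for lang, acts in acts_by_lang.items()}

def language_specific_features(probs, lang, hi=0.5, lo=0.1):
    """Features that fire often for `lang` (> hi) but rarely elsewhere (< lo)."""
    others = np.max([p for l, p in probs.items() if l != lang], axis=0)
    return np.where((probs[lang] > hi) & (others < lo))[0]

# Synthetic activations: feature 0 fires on every French token only,
# everything else fires sparsely (~2% of tokens) in both languages.
rng = np.random.default_rng(1)
n_tokens, n_features = 200, 8
fr = rng.random((n_tokens, n_features)) * (rng.random((n_tokens, n_features)) < 0.02)
de = rng.random((n_tokens, n_features)) * (rng.random((n_tokens, n_features)) < 0.02)
fr[:, 0] = 1.0
probs = activation_probabilities({"fr": fr, "de": de})
fr_features = language_specific_features(probs, "fr")
```

Features selected this way could then be scored as a lightweight language identifier, which is the comparison to fastText the summary mentions.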
arXiv Detail & Related papers (2025-07-15T12:00:30Z) - Just Go Parallel: Improving the Multilingual Capabilities of Large Language Models [59.21082876068122]
Large language models (LLMs) have demonstrated impressive translation capabilities even without being explicitly trained on parallel data. Recent work suggests that this ability is actually driven by incidental bilingual signals present in the training data. Various methods have been proposed to maximize the utility of parallel data to enhance the multilingual capabilities of multilingual encoder-based and encoder-decoder language models.
arXiv Detail & Related papers (2025-06-16T02:21:15Z) - Middle-Layer Representation Alignment for Cross-Lingual Transfer in Fine-Tuned LLMs [20.756383171892608]
Middle layers exhibit the strongest potential for cross-lingual alignment. Experiments on slot filling, machine translation, and structured text generation show consistent improvements in cross-lingual transfer. We show that separately trained alignment modules can be merged with existing task-specific modules, improving cross-lingual capabilities without full re-training.
arXiv Detail & Related papers (2025-02-20T18:45:43Z) - Large Language Models Share Representations of Latent Grammatical Concepts Across Typologically Diverse Languages [15.203789021094982]
In large language models (LLMs), how are multiple languages learned and encoded? We train sparse autoencoders on Llama-3-8B and Aya-23-8B, and demonstrate that abstract grammatical concepts are often encoded in feature directions shared across many languages.
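One simple way to quantify "shared feature directions" of the kind this paper reports is the mean pairwise cosine similarity between the directions independently found per language for the same grammatical concept. The sketch below uses synthetic vectors and is not the paper's evaluation protocol.

```python
import numpy as np

def shared_direction_score(dirs_by_lang):
    """Mean pairwise cosine similarity between per-language directions
    found for the same grammatical concept; near 1.0 means shared."""
    langs = list(dirs_by_lang)
    sims = []
    for i in range(len(langs)):
        for j in range(i + 1, len(langs)):
            a, b = dirs_by_lang[langs[i]], dirs_by_lang[langs[j]]
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(sims))

# Synthetic directions: one shared concept direction plus small
# language-dependent noise, mimicking a concept encoded cross-lingually.
rng = np.random.default_rng(2)
base = rng.normal(size=32)
dirs = {lang: base + 0.05 * rng.normal(size=32) for lang in ("en", "fr", "de")}
score = shared_direction_score(dirs)
```

A score near 1.0 would support the shared-representation reading, while scores near 0 would indicate language-specific encodings.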
arXiv Detail & Related papers (2025-01-10T21:18:21Z) - Crosslingual Capabilities and Knowledge Barriers in Multilingual Large Language Models [62.91524967852552]
Large language models (LLMs) are typically multilingual due to pretraining on diverse multilingual corpora. But can these models relate corresponding concepts across languages, i.e., be crosslingual? This study evaluates state-of-the-art LLMs on inherently crosslingual tasks.
arXiv Detail & Related papers (2024-06-23T15:15:17Z) - How do Large Language Models Handle Multilingualism? [81.15060972112563]
This study explores how large language models (LLMs) handle multilingualism.
LLMs initially understand the query, converting multilingual inputs into English for task-solving.
In the intermediate layers, they employ English for thinking and incorporate multilingual knowledge with self-attention and feed-forward structures.
arXiv Detail & Related papers (2024-02-29T02:55:26Z) - Soft Language Clustering for Multilingual Model Pre-training [57.18058739931463]
We propose XLM-P, which contextually retrieves prompts as flexible guidance for conditionally encoding instances.
Our XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods.
arXiv Detail & Related papers (2023-06-13T08:08:08Z) - Breaking Down Multilingual Machine Translation [74.24795388967907]
We show that multilingual training is beneficial to encoders in general, while it only benefits decoders for low-resource languages (LRLs).
Our many-to-one models for high-resource languages and one-to-many models for LRLs outperform the best results reported by Aharoni et al.
arXiv Detail & Related papers (2021-10-15T14:57:12Z) - FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding [85.29270319872597]
We propose an enhanced fusion method that takes cross-lingual data as input for XLM finetuning.
During inference, the model makes predictions based on the text input in the target language and its translation in the source language.
We further propose a KL-divergence self-teaching loss for model training, based on auto-generated soft pseudo-labels for translated text in the target language.
arXiv Detail & Related papers (2020-09-10T22:42:15Z)
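The KL-divergence self-teaching loss in the FILTER summary above can be sketched as follows. Here `teacher_probs` stands in for the auto-generated soft pseudo-labels; any temperature or loss weighting the actual method uses is omitted, so treat this as a generic distillation-style KL term rather than FILTER's exact objective.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_self_teaching_loss(student_logits, teacher_probs, eps=1e-12):
    """Mean KL(teacher || student) over a batch: the student is trained on
    translated text against soft pseudo-labels from a teacher pass."""
    p = softmax(student_logits)
    kl = np.sum(teacher_probs * (np.log(teacher_probs + eps) - np.log(p + eps)),
                axis=-1)
    return float(np.mean(kl))

# When the student already matches the pseudo-labels, the loss vanishes;
# a mismatched teacher distribution yields a strictly positive loss.
logits = np.array([[2.0, 0.5, -1.0], [0.0, 1.0, 3.0]])
zero_loss = kl_self_teaching_loss(logits, softmax(logits))
mismatch = kl_self_teaching_loss(logits, softmax(logits[::-1]))
```

Because KL divergence is zero only when the two distributions agree, minimizing this term pushes the student's predictions on translated text toward the soft pseudo-labels.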
This list is automatically generated from the titles and abstracts of the papers in this site.