Unifying Attention Heads and Task Vectors via Hidden State Geometry in In-Context Learning
- URL: http://arxiv.org/abs/2505.18752v1
- Date: Sat, 24 May 2025 15:42:20 GMT
- Title: Unifying Attention Heads and Task Vectors via Hidden State Geometry in In-Context Learning
- Authors: Haolin Yang, Hakaze Cho, Yiqiao Zhong, Naoya Inoue
- Abstract summary: In this paper, we analyze two geometric factors that govern performance: the separability and alignment of query hidden states. Previous Token Heads drive separability, while Induction Heads and task vectors enhance alignment. Our findings thus bridge the gap between attention heads and task vectors, offering a unified account of ICL's underlying mechanisms.
- Score: 2.4866936275046405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The unusual properties of in-context learning (ICL) have prompted investigations into the internal mechanisms of large language models. Prior work typically focuses on either special attention heads or task vectors at specific layers, but lacks a unified framework linking these components to the evolution of hidden states across layers that ultimately produce the model's output. In this paper, we propose such a framework for ICL in classification tasks by analyzing two geometric factors that govern performance: the separability and alignment of query hidden states. A fine-grained analysis of layer-wise dynamics reveals a striking two-stage mechanism: separability emerges in early layers, while alignment develops in later layers. Ablation studies further show that Previous Token Heads drive separability, while Induction Heads and task vectors enhance alignment. Our findings thus bridge the gap between attention heads and task vectors, offering a unified account of ICL's underlying mechanisms.
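To make the two geometric factors concrete, here is a minimal sketch (not the authors' code) of how one might probe them per layer: separability as the accuracy of a linear probe on query hidden states, and alignment as the accuracy of classifying those states by dot product with the label rows of the unembedding matrix. All shapes, names, and the toy data below are illustrative assumptions.

```python
# Hypothetical probes for the paper's two geometric factors; synthetic data
# stands in for real per-layer hidden states of an LLM.
import numpy as np
from sklearn.linear_model import LogisticRegression

def separability(h, y):
    """In-sample accuracy of a linear probe on hidden states h: (n, d)."""
    return LogisticRegression(max_iter=1000).fit(h, y).score(h, y)

def alignment(h, y, label_unembed):
    """Accuracy of classifying h by dot product with the (n_classes, d)
    label rows of the unembedding matrix."""
    return ((h @ label_unembed.T).argmax(axis=1) == y).mean()

rng = np.random.default_rng(0)
n, d, n_layers = 200, 64, 12
y = rng.integers(0, 2, size=n)                 # binary classification task
W_label = rng.normal(size=(2, d))              # stand-in unembedding rows
for layer in range(n_layers):
    # Toy dynamic: states drift toward the correct label direction with depth.
    h = rng.normal(size=(n, d)) + 0.3 * layer * W_label[y]
    print(f"layer {layer:2d}  sep={separability(h, y):.3f}  "
          f"align={alignment(h, y, W_label):.3f}")
```

Under the paper's two-stage picture, separability would saturate in early layers while alignment climbs only later; the toy dynamic above conflates the two stages and is meant only to pin down the measurements.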
Related papers
- Understanding Task Vectors in In-Context Learning: Emergence, Functionality, and Limitations [19.539276425108987]
This work proposes the Linear Combination Conjecture, positing that task vectors act as single in-context demonstrations formed through linear combinations of the original ones. We show that task vectors naturally emerge in linear transformers trained on triplet-formatted prompts through loss landscape analysis. We predict that task vectors fail to represent high-rank mappings and confirm this on practical LLMs.
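A quick numerical version of the regression test this conjecture invites, with everything hypothetical: the arrays below stand in for task vectors that would in practice be hidden states read off a model at a fixed position.

```python
# Sketch: is a k-shot task vector a linear combination of single-demo vectors?
import numpy as np

rng = np.random.default_rng(1)
d, k = 32, 4
demo_vecs = rng.normal(size=(k, d))            # v_i from single-demo prompts
true_w = np.array([0.4, 0.3, 0.2, 0.1])
# "Measured" k-shot task vector: a linear combination plus a little noise.
task_vec = true_w @ demo_vecs + 0.01 * rng.normal(size=d)

# Regress the k-shot vector on the single-demo vectors; a small residual
# supports the Linear Combination Conjecture for this prompt.
w_hat, *_ = np.linalg.lstsq(demo_vecs.T, task_vec, rcond=None)
rel_err = np.linalg.norm(demo_vecs.T @ w_hat - task_vec) / np.linalg.norm(task_vec)
print("recovered weights:", np.round(w_hat, 3))
print("relative residual:", round(rel_err, 4))
```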
arXiv Detail & Related papers (2025-06-10T17:59:31Z)
- From Compression to Expansion: A Layerwise Analysis of In-Context Learning [20.64102133977965]
In-context learning (ICL) enables large language models to adapt to new tasks without weight updates by learning from demonstration sequences. We conduct a statistical geometric analysis of ICL representations to investigate how task-specific information is captured across layers. Our findings reveal an intriguing layerwise dynamic in ICL, highlight how structured representations emerge within LLMs, and showcase that analyzing internal representations can facilitate a deeper understanding of model behavior.
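One statistic such a layerwise geometric analysis can track is the effective rank (participation ratio) of the hidden-state covariance; below is a small sketch with synthetic activations, not the paper's code, just to make the measurement concrete.

```python
# Participation ratio as a layer-wise 'effective dimensionality' probe.
import numpy as np

def participation_ratio(h):
    """h: (n_samples, d). PR = (sum eig)^2 / sum(eig^2) of the covariance."""
    eig = np.clip(np.linalg.eigvalsh(np.cov(h, rowvar=False)), 0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(2)
# Toy 'layers': variance first concentrates (compression), then spreads out
# again (expansion); real usage would feed per-layer model activations.
for layer, spread in enumerate([1.0, 0.5, 0.2, 0.5, 1.0]):
    h = rng.normal(size=(300, 64)) * np.linspace(1.0, spread, 64)
    print(f"layer {layer}: effective rank ~ {participation_ratio(h):.1f}")
```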
arXiv Detail & Related papers (2025-05-22T22:22:03Z)
- A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems [93.8285345915925]
Reasoning is a fundamental cognitive process that enables logical inference, problem-solving, and decision-making. With the rapid advancement of large language models (LLMs), reasoning has emerged as a key capability that distinguishes advanced AI systems. We categorize existing methods along two dimensions: (1) Regimes, which define the stage at which reasoning is achieved; and (2) Architectures, which determine the components involved in the reasoning process.
arXiv Detail & Related papers (2025-04-12T01:27:49Z)
- How do Large Language Models Understand Relevance? A Mechanistic Interpretability Perspective [64.00022624183781]
Large language models (LLMs) can assess relevance and support information retrieval (IR) tasks. We investigate how different LLM modules contribute to relevance judgment through the lens of mechanistic interpretability.
arXiv Detail & Related papers (2025-04-10T16:14:55Z)
- Attention Heads of Large Language Models: A Survey [10.136767972375639]
We aim to demystify the internal reasoning processes of Large Language Models (LLMs) by systematically exploring the roles and mechanisms of attention heads. We first introduce a novel four-stage framework inspired by the human thought process: Knowledge Recalling, In-Context Identification, Latent Reasoning, and Expression Preparation. We analyze the experimental methodologies used to discover these special heads, dividing them into two categories: Modeling-Free and Modeling-Required methods.
arXiv Detail & Related papers (2024-09-05T17:59:12Z)
- Distributional Associations vs In-Context Reasoning: A Study of Feed-forward and Attention Layers [49.80959223722325]
We study the distinction between feed-forward and attention layers in large language models. We find that feed-forward layers tend to learn simple distributional associations such as bigrams, while attention layers focus on in-context reasoning.
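The toy predictors below (illustrative only, not model internals) make the dichotomy concrete: one answers from corpus bigram statistics, as feed-forward layers are argued to do, and one copies what followed the same token earlier in the prompt, as attention-based in-context reasoning does.

```python
# Distributional (bigram) prediction vs in-context (induction-style) copying.
from collections import Counter, defaultdict

corpus = "a b a b a c".split()                  # training stream for bigram stats
bigram = defaultdict(Counter)
for x, nxt in zip(corpus, corpus[1:]):
    bigram[x][nxt] += 1

def bigram_predict(prompt):
    """Answer from global co-occurrence statistics of the last token."""
    last = prompt[-1]
    return bigram[last].most_common(1)[0][0] if bigram[last] else None

def induction_predict(prompt):
    """Copy the token that followed the most recent earlier occurrence
    of the last token within the prompt itself."""
    last = prompt[-1]
    for i in range(len(prompt) - 2, -1, -1):
        if prompt[i] == last:
            return prompt[i + 1]
    return None

prompt = "x q y x q y x q".split()
print("bigram:", bigram_predict(prompt))        # None: 'q' never seen in corpus
print("induction:", induction_predict(prompt))  # 'y': recovered from context
```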
arXiv Detail & Related papers (2024-06-05T08:51:08Z)
- On Understanding Attention-Based In-Context Learning for Categorical Data [49.40350941996942]
We develop a network composed of attention blocks, each employing a self-attention layer followed by a cross-attention layer, with associated skip connections. This model can exactly perform multi-step functional gradient descent (GD) for in-context inference with categorical observations.
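For intuition only, here is a numerical sketch of multi-step GD performed purely from in-context pairs, in the simpler and well-studied linear-regression setting rather than this paper's categorical one (following the construction of von Oswald et al., 2023); each update plays the role of one attention block.

```python
# In-context multi-step GD sketch: linear regression, synthetic data.
import numpy as np

rng = np.random.default_rng(3)
d, n = 8, 32
w_star = rng.normal(size=d)                    # ground-truth function
X = rng.normal(size=(n, d))
y = X @ w_star                                 # in-context demonstrations
x_q = rng.normal(size=d)                       # query point

w = np.zeros(d)
lr = 1.0 / np.linalg.norm(X, ord=2) ** 2       # step size below 2/L, so GD converges
for _ in range(100):                           # one 'attention block' per step
    w += lr * X.T @ (y - X @ w)                # GD step on the in-context loss
print("query prediction error:", abs(x_q @ w - x_q @ w_star))
```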
arXiv Detail & Related papers (2024-05-27T15:03:21Z)
- Dual Contrastive Learning for General Face Forgery Detection [64.41970626226221]
We propose a novel face forgery detection framework, named Dual Contrastive Learning (DCL), which constructs positive and negative paired data.
To explore the essential discrepancies, Intra-Instance Contrastive Learning (Intra-ICL) is introduced to focus on the local content inconsistencies prevalent in the forged faces.
arXiv Detail & Related papers (2021-12-27T05:44:40Z)
- Structure-Aware Feature Generation for Zero-Shot Learning [108.76968151682621]
We introduce a novel structure-aware feature generation scheme, termed SA-GAN, to account for the topological structure in learning both the latent space and the generative networks.
Our method significantly enhances generalization to unseen classes and consequently improves classification performance.
arXiv Detail & Related papers (2021-08-16T11:52:08Z)
- DisenE: Disentangling Knowledge Graph Embeddings [33.169388832519]
DisenE is an end-to-end framework to learn disentangled knowledge graph embeddings.
We introduce an attention-based mechanism that enables the model to explicitly focus on relevant components of entity embeddings according to a given relation.
arXiv Detail & Related papers (2020-10-28T03:45:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.