Found in the Middle: Calibrating Positional Attention Bias Improves Long Context Utilization
- URL: http://arxiv.org/abs/2406.16008v2
- Date: Wed, 3 Jul 2024 17:40:00 GMT
- Title: Found in the Middle: Calibrating Positional Attention Bias Improves Long Context Utilization
- Authors: Cheng-Yu Hsieh, Yung-Sung Chuang, Chun-Liang Li, Zifeng Wang, Long T. Le, Abhishek Kumar, James Glass, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister
- Abstract summary: Large language models (LLMs) struggle to capture relevant information located in the middle of their input.
This phenomenon is known as the lost-in-the-middle problem.
We show that found-in-the-middle achieves better performance in locating relevant information within a long context.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs), even when specifically trained to process long input contexts, struggle to capture relevant information located in the middle of their input. This phenomenon is known as the lost-in-the-middle problem. In this work, we make three contributions. First, we set out to understand the factors that cause this phenomenon. In doing so, we establish a connection between the lost-in-the-middle problem and LLMs' intrinsic attention bias: LLMs exhibit a U-shaped attention bias where tokens at the beginning and end of the input receive higher attention, regardless of their relevance. Second, we mitigate this positional bias through a calibration mechanism, found-in-the-middle, that allows the model to attend to contexts faithfully according to their relevance, even when they appear in the middle. Third, we show that found-in-the-middle not only achieves better performance in locating relevant information within a long context, but also leads to improved retrieval-augmented generation (RAG) performance across various tasks, outperforming existing methods by up to 15 percentage points. These findings open up future directions in understanding LLM attention bias and its potential consequences.
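As a rough illustration of the calibration idea described in the abstract, the sketch below estimates a position-only attention bias by measuring how much attention each context slot receives when it holds irrelevant filler text, then subtracts that bias from the observed attention so the remaining mass reflects relevance. This is a hypothetical, simplified re-implementation: the function names, array shapes, and dummy-document procedure are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of calibrating attention against a U-shaped positional bias.
# Assumes we can (1) read the model's attention mass per context document and
# (2) estimate a position-only bias by filling each slot with irrelevant text.
import numpy as np

def estimate_positional_bias(attention_on_dummy_docs: np.ndarray) -> np.ndarray:
    """attention_on_dummy_docs: [num_trials, num_positions] attention mass each
    slot received when filled with irrelevant text; any remaining signal
    reflects position rather than relevance (a U-shaped curve in practice)."""
    return attention_on_dummy_docs.mean(axis=0)

def calibrate(observed_attention: np.ndarray, positional_bias: np.ndarray) -> np.ndarray:
    """Remove the position-only component, then renormalize so the calibrated
    scores reflect relevance regardless of where a document sits."""
    relevance = np.clip(observed_attention - positional_bias, a_min=1e-9, a_max=None)
    return relevance / relevance.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated U-shaped bias: the first and last slots get extra attention.
    bias = np.array([0.30, 0.10, 0.05, 0.10, 0.25])
    dummy_runs = bias + 0.01 * rng.standard_normal((50, 5))
    # The relevant document sits in the middle (index 2), but the raw
    # attention is dominated by the positional bias at the edges.
    raw = bias + np.array([0.0, 0.0, 0.15, 0.0, 0.0])
    raw = raw / raw.sum()
    est_bias = estimate_positional_bias(dummy_runs)
    print("raw attention:       ", np.round(raw, 3))
    print("calibrated attention:", np.round(calibrate(raw, est_bias / est_bias.sum()), 3))
```

In this toy example the raw attention favors the first and last slots, while the calibrated scores concentrate on the relevant middle document.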
Related papers
- Eliminating Position Bias of Language Models: A Mechanistic Approach [119.34143323054143]
Position bias has proven to be a prevalent issue of modern language models (LMs).
We find that causal attention generally causes models to favor distant content, while relative positional encodings like RoPE prefer nearby ones.
We propose to eliminate position bias caused by different input segment orders (e.g., options in LM-as-a-judge, retrieved documents in QA) in a training-free, zero-shot manner.
arXiv Detail & Related papers (2024-07-01T09:06:57Z) - Attention Instruction: Amplifying Attention in the Middle via Prompting [35.07098912195063]
Language models still suffer from position bias and have difficulty in accessing and using the middle part of the context.
We examine the relative position awareness of LLMs and the feasibility of mitigating disproportional attention through prompting.
arXiv Detail & Related papers (2024-06-24T19:35:11Z) - Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration [15.36841874118801]
We aim to provide a more profound understanding of the existence of attention sinks within large language models (LLMs).
We propose a training-free Attention Calibration Technique (ACT) that automatically optimizes attention distributions on the fly during inference in an input-adaptive manner.
ACT achieves an average improvement of up to 7.30% in accuracy across different datasets when applied to Llama-30B.
arXiv Detail & Related papers (2024-06-22T07:00:43Z) - Insights into LLM Long-Context Failures: When Transformers Know but Don't Tell [14.146413770229392]
Large Language Models (LLMs) exhibit positional bias, struggling to utilize information from the middle or end of long contexts.
We find that while LLMs encode the position of target information, they often fail to leverage this in generating accurate responses.
arXiv Detail & Related papers (2024-06-20T18:50:44Z) - Position-Aware Parameter Efficient Fine-Tuning Approach for Reducing Positional Bias in LLMs [18.832135309689736]
Recent advances in large language models (LLMs) have enhanced their ability to process long input contexts.
Recent studies show a positional bias in LLMs, demonstrating varying performance depending on the location of useful information.
We develop a Position-Aware Parameter Efficient Fine-Tuning (PAPEFT) approach, which is composed of a data augmentation technique and an efficient parameter adapter.
arXiv Detail & Related papers (2024-04-01T19:04:17Z) - Found in the Middle: How Language Models Use Long Contexts Better via
Plug-and-Play Positional Encoding [78.36702055076456]
This paper introduces Multi-scale Positional Encoding (Ms-PoE), a simple yet effective plug-and-play approach to enhance the capacity of LLMs to handle relevant information located in the middle of the context (see the sketch after this list).
arXiv Detail & Related papers (2024-03-05T04:58:37Z) - Fortify the Shortest Stave in Attention: Enhancing Context Awareness of Large Language Models for Effective Tool Use [74.72150542395487]
An inherent waveform pattern in the attention allocation of large language models (LLMs) significantly affects their performance in tasks demanding a high degree of context awareness.
To address this issue, we propose a novel inference method named Attention Buckets.
arXiv Detail & Related papers (2023-12-07T17:24:51Z) - Never Lost in the Middle: Improving Large Language Models via Attention
Strengthening Question Answering [0.14043931310479374]
Large language models (LLMs) struggle to locate correct information in long contexts.
This paper proposes to enhance the information searching and reflection ability of LLMs in long contexts via specially designed tasks.
Experimental results show substantial improvement in multi-doc QA and other benchmarks, surpassing state-of-the-art models by a 13.7% absolute gain in shuffled settings.
arXiv Detail & Related papers (2023-11-15T18:42:44Z) - Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z) - Causal Attention for Unbiased Visual Recognition [76.87114090435618]
The attention module does not always help deep models learn causal features that are robust in any confounding context.
We propose a causal attention module (CaaM) that self-annotates the confounders in an unsupervised fashion.
In OOD settings, deep models with CaaM outperform those without it significantly.
arXiv Detail & Related papers (2021-08-19T16:45:51Z)
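For the plug-and-play positional-encoding entry above (Ms-PoE), the following minimal sketch shows one way per-head position rescaling could look: each attention head divides token positions by its own ratio before computing rotary (RoPE) angles, so different heads see the context at different positional "resolutions". The shapes, ratio range, and head-to-ratio assignment here are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of multi-scale positional encoding via per-head RoPE rescaling.
import numpy as np

def rope_angles(positions: np.ndarray, head_dim: int, base: float = 10000.0) -> np.ndarray:
    """Standard RoPE angles: [num_positions, head_dim // 2]."""
    inv_freq = 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))
    return np.outer(positions, inv_freq)

def multi_scale_angles(seq_len: int, num_heads: int, head_dim: int,
                       min_ratio: float = 1.2, max_ratio: float = 1.8) -> np.ndarray:
    """Per-head angles: [num_heads, seq_len, head_dim // 2]. Each head divides
    positions by its own ratio (linearly spaced here), giving a spectrum of
    positional scales across heads."""
    ratios = np.linspace(min_ratio, max_ratio, num_heads)
    positions = np.arange(seq_len)
    return np.stack([rope_angles(positions / r, head_dim) for r in ratios])

if __name__ == "__main__":
    angles = multi_scale_angles(seq_len=4096, num_heads=8, head_dim=64)
    print(angles.shape)  # (8, 4096, 32)
```

The downscaled positions keep far-away (middle-of-context) tokens within a position range the model handles well, which is roughly the intuition the entry above summarizes.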