Eliminating Position Bias of Language Models: A Mechanistic Approach
- URL: http://arxiv.org/abs/2407.01100v2
- Date: Wed, 02 Oct 2024 17:09:53 GMT
- Title: Eliminating Position Bias of Language Models: A Mechanistic Approach
- Authors: Ziqi Wang, Hanlin Zhang, Xiner Li, Kuan-Hao Huang, Chi Han, Shuiwang Ji, Sham M. Kakade, Hao Peng, Heng Ji
- Abstract summary: Position bias has proven to be a prevalent issue of modern language models (LMs).
Our mechanistic analysis attributes the position bias to two components employed in nearly all state-of-the-art LMs: causal attention and relative positional encodings.
By eliminating position bias, models achieve better performance and reliability in downstream tasks, including LM-as-a-judge, retrieval-augmented QA, molecule generation, and math reasoning.
- Score: 119.34143323054143
- Abstract: Position bias has proven to be a prevalent issue of modern language models (LMs), where the models prioritize content based on its position within the given context. This bias often leads to unexpected model failures and hurts performance, robustness, and reliability across various applications. Our mechanistic analysis attributes the position bias to two components employed in nearly all state-of-the-art LMs: causal attention and relative positional encodings. Based on the analyses, we propose to eliminate position bias (e.g., different retrieved documents' orders in QA affect performance) with a training-free zero-shot approach. Our method changes the causal attention to bidirectional attention between documents and utilizes model attention values to decide the relative orders of documents instead of using the order provided in input prompts, therefore enabling Position-INvariant inferencE (PINE) at the document level. By eliminating position bias, models achieve better performance and reliability in downstream tasks, including LM-as-a-judge, retrieval-augmented QA, molecule generation, and math reasoning. Notably, PINE is especially useful when adapting LMs for evaluating reasoning pairs: it consistently provides 8 to 10 percentage points performance gains, making Llama-3-70B-Instruct perform even better than GPT-4-0125-preview and GPT-4o-2024-08-06 on the RewardBench reasoning set.
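The abstract is concrete enough to sketch. Below is a minimal, illustrative Python sketch of PINE's two ingredients, not the authors' implementation: an attention mask that stays causal everywhere except between the parallel documents, where it becomes bidirectional, and a re-ordering of documents by the attention mass the query places on them. The span format, function names, and toy scoring rule are all assumptions made for illustration.

```python
# Illustrative sketch only (not the authors' code). Assumes document and
# query positions are known as (start, end) index spans over the sequence.
import numpy as np

def pine_style_mask(doc_spans, seq_len):
    """Boolean attention mask: mask[i, j] = True means i may attend to j.

    Starts from a standard causal mask, then opens full bidirectional
    attention among all tokens belonging to the parallel documents."""
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))  # causal base
    doc_idx = [t for s, e in doc_spans for t in range(s, e)]
    for i in doc_idx:
        for j in doc_idx:
            mask[i, j] = True  # documents attend to each other both ways
    return mask

def rank_docs_by_attention(attn, doc_spans, query_span):
    """Order documents by attention mass from the query tokens, replacing
    the position-biased order given in the input prompt."""
    q0, q1 = query_span
    scores = [attn[q0:q1, s:e].sum() for s, e in doc_spans]
    return np.argsort(scores)[::-1]  # most-attended document first

# Toy usage: two documents at [0, 4) and [4, 8), query tokens at [8, 10).
mask = pine_style_mask([(0, 4), (4, 8)], seq_len=10)
attn = np.random.rand(10, 10) * mask  # stand-in for real attention weights
print(rank_docs_by_attention(attn, [(0, 4), (4, 8)], (8, 10)))
```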
Related papers
- Mitigate Position Bias in Large Language Models via Scaling a Single Dimension [47.792435921037274]
This paper first explores the micro-level manifestations of position bias, concluding that attention weights are a micro-level expression of position bias.
It further identifies that, in addition to position embeddings, the causal attention mask also contributes to position bias by creating position-specific hidden states.
Based on these insights, we propose a method to mitigate position bias by scaling these positional hidden states (a hypothetical sketch follows this entry).
arXiv Detail & Related papers (2024-06-04T17:55:38Z)
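As a companion to the entry above, here is a hypothetical sketch of the intervention it describes: down-weighting a single hidden-state dimension that carries positional information. The layer choice, dimension index, and scale factor below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of "scaling a single dimension": shrink one
# hidden-state dimension assumed to encode positional information.
import numpy as np

def scale_positional_dimension(hidden_states, dim, scale=0.5):
    """hidden_states: (seq_len, d_model) activations at some layer."""
    out = hidden_states.copy()
    out[:, dim] *= scale  # attenuate the position-carrying dimension
    return out

h = np.random.randn(16, 64)  # toy activations; dim=3 is an arbitrary pick
h_debiased = scale_positional_dimension(h, dim=3, scale=0.5)
```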
- Evaluating Generative Language Models in Information Extraction as Subjective Question Correction [49.729908337372436]
Inspired by the principles of subjective question correction, we propose a new evaluation method, SQC-Score.
Results on three information extraction tasks show that SQC-Score is preferred by human annotators over the baseline metrics.
arXiv Detail & Related papers (2024-04-04T15:36:53Z)
- Position-Aware Parameter Efficient Fine-Tuning Approach for Reducing Positional Bias in LLMs [18.832135309689736]
Recent advances in large language models (LLMs) have enhanced their ability to process long input contexts.
Recent studies show a positional bias in LLMs, demonstrating varying performance depending on the location of useful information.
We develop a position-aware parameter-efficient fine-tuning (PAPEFT) approach, composed of a data augmentation technique and an efficient parameter adapter.
arXiv Detail & Related papers (2024-04-01T19:04:17Z)
- Technical Report: Impact of Position Bias on Language Models in Token Classification [0.6372911857214884]
Downstream tasks such as Named Entity Recognition (NER) or Part-of-Speech (POS) tagging are known to suffer from data imbalance issues.
This paper investigates an often-overlooked issue of encoder models, specifically the position bias of positive examples in token classification tasks.
We show that LMs can suffer from this bias, with an average performance drop of 3% to 9%.
arXiv Detail & Related papers (2023-04-26T13:57:25Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Look to the Right: Mitigating Relative Position Bias in Extractive Question Answering [38.36299280464046]
Extractive question answering (QA) models tend to exploit spurious correlations to make predictions.
The relative position of an answer can be exploited by QA models as a superficial cue for making predictions.
We propose an ensemble-based debiasing method that does not require prior knowledge about the distribution of relative positions.
arXiv Detail & Related papers (2022-10-26T08:01:38Z)
- The Curious Case of Absolute Position Embeddings [65.13827063579728]
Transformer language models encode the notion of word order using positional information.
In natural language, it is not absolute position that matters but relative position, and the extent to which absolute position embeddings (APEs) can capture this type of information has not been investigated.
We observe that models trained with APEs over-rely on positional information, to the point that they break down when subjected to sentences with shifted position information.
arXiv Detail & Related papers (2022-10-23T00:00:04Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model, analogous to gradient descent in functional space.
GGD can learn a more robust base model both with task-specific biased models that use prior knowledge and with a self-ensemble biased model that uses none.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models [11.148662334602639]
We analyze the position embeddings of existing language models and find strong evidence of translation invariance.
We propose translation-invariant self-attention (TISA), which accounts for the relative position between tokens in an interpretable fashion (an illustrative sketch follows this entry).
arXiv Detail & Related papers (2021-06-03T15:56:26Z)
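The TISA entry above hinges on positional handling that depends only on the offset between tokens. The sketch below illustrates that idea generically, adding a per-offset bias to the attention logits; the bias table, its size, and the clipping rule are assumptions for illustration rather than the paper's parameterization.

```python
# Generic sketch of translation-invariant attention scoring: the positional
# term depends only on the offset i - j, never on absolute positions.
import numpy as np

def ti_attention_scores(q, k, rel_bias, max_offset):
    """q, k: (seq_len, d); rel_bias: (2 * max_offset + 1,) bias per offset."""
    seq_len, d = q.shape
    logits = q @ k.T / np.sqrt(d)  # content-based term
    offsets = np.arange(seq_len)[:, None] - np.arange(seq_len)[None, :]
    offsets = np.clip(offsets, -max_offset, max_offset) + max_offset
    logits += rel_bias[offsets]  # translation-invariant positional term
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # row-wise softmax

q, k = np.random.randn(8, 16), np.random.randn(8, 16)
attn = ti_attention_scores(q, k, rel_bias=np.random.randn(9), max_offset=4)
```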
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.