Revisiting LRP: Positional Attribution as the Missing Ingredient for Transformer Explainability
- URL: http://arxiv.org/abs/2506.02138v1
- Date: Mon, 02 Jun 2025 18:07:55 GMT
- Title: Revisiting LRP: Positional Attribution as the Missing Ingredient for Transformer Explainability
- Authors: Yarden Bakish, Itamar Zimerman, Hila Chefer, Lior Wolf
- Abstract summary: Layer-wise relevance propagation is one of the most promising approaches to explainability in deep learning. We propose specialized, theoretically grounded LRP rules designed to propagate attributions across various positional encoding methods. Our method significantly outperforms the state of the art in both vision and NLP explainability tasks.
- Score: 53.21677928601684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of effective explainability tools for Transformers is a crucial pursuit in deep learning research. One of the most promising approaches in this domain is Layer-wise Relevance Propagation (LRP), which propagates relevance scores backward through the network to the input space by redistributing activation values according to predefined rules. However, existing LRP-based methods for Transformer explainability entirely overlook a critical component of the Transformer architecture: its positional encoding (PE). This omission violates the conservation property and discards an important and distinctive type of relevance, one associated with structural and positional features. To address this limitation, we reformulate the input space for Transformer explainability as a set of position-token pairs. This allows us to propose specialized, theoretically grounded LRP rules designed to propagate attributions across various positional encoding methods, including Rotary, Learnable, and Absolute PE. Extensive experiments with both fine-tuned classifiers and zero-shot foundation models, such as LLaMA 3, demonstrate that our method significantly outperforms the state of the art in both vision and NLP explainability tasks. Our code is publicly available.
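To make the conservation point concrete, below is a minimal NumPy sketch of the kind of rule the reformulation calls for: the input is treated as position-token pairs, and the relevance arriving at the sum of a token embedding and an additive PE is split between the two addends with a standard epsilon-LRP rule. This is an illustrative sketch under our own assumptions (function names, epsilon stabilizer), not the paper's actual rules, which also cover Rotary and other PEs.

```python
import numpy as np

def lrp_split_additive_pe(e_tok, e_pos, R, eps=1e-6):
    """Split relevance R of x = e_tok + e_pos between token and position.

    e_tok, e_pos, R: arrays of shape (seq_len, d_model).
    Returns (R_tok, R_pos) with R_tok + R_pos ~= R (conservation).
    """
    z = e_tok + e_pos
    denom = z + eps * np.sign(z)       # epsilon-stabilized denominator
    R_tok = e_tok / denom * R
    R_pos = e_pos / denom * R
    return R_tok, R_pos

rng = np.random.default_rng(0)
e_tok = rng.normal(size=(4, 8))        # token embeddings for 4 positions
e_pos = rng.normal(size=(4, 8))        # additive (e.g. learnable/absolute) PE
R = rng.normal(size=(4, 8))            # relevance arriving at the sum
R_tok, R_pos = lrp_split_additive_pe(e_tok, e_pos, R)
print(np.allclose(R_tok + R_pos, R, atol=1e-4))   # True: conservation holds
```

Because each addend receives a share proportional to its contribution, R_tok + R_pos recovers the incoming relevance, which is precisely the conservation property that is violated when the positional term is ignored.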
Related papers
- Prototypical Progressive Alignment and Reweighting for Generalizable Semantic Segmentation [13.24093379138835]
Generalizable semantic segmentation aims to perform well on unseen target domains. Class-wise prototypes serve as domain-invariant cues that benefit generalization due to their stability and semantic consistency. We propose a novel framework for generalizable semantic segmentation: Prototypical Progressive Alignment and Reweighting.
arXiv Detail & Related papers (2025-07-16T06:42:21Z)
- PaTH Attention: Position Encoding via Accumulating Householder Transformations [56.32365080761523]
PaTH is a flexible, data-dependent position encoding scheme based on accumulated products of Householder transformations. We derive an efficient parallel training algorithm by exploiting a compact representation of products of Householder matrices. (A sketch of the Householder idea appears after this list.)
arXiv Detail & Related papers (2025-05-22T08:36:09Z)
- Unpacking Positional Encoding in Transformers: A Spectral Analysis of Content-Position Coupling [10.931433906211534]
Positional encoding (PE) is essential for enabling Transformers to model sequential structure. We present a unified framework that analyzes PE through the spectral properties of Toeplitz and related matrices. We establish explicit content-relative mixing with relative-position Toeplitz signals as a key principle for effective PE design. (A Toeplitz-spectrum sketch appears after this list.)
arXiv Detail & Related papers (2025-05-19T12:11:13Z)
- DAPE V2: Process Attention Score as Feature Map for Length Extrapolation [63.87956583202729]
We conceptualize attention scores as a feature map and apply the convolution operator to mimic processing methods from computer vision. This insight, which can be adapted to various attention-based models, suggests that the current Transformer architecture has room for further evolution. (A convolution-over-attention sketch appears after this list.)
arXiv Detail & Related papers (2024-10-07T07:21:49Z)
- AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers [14.147646140595649]
Large Language Models are prone to biased predictions and hallucinations. Achieving faithful attributions for the entirety of a black-box transformer model while maintaining computational efficiency remains an unsolved challenge.
arXiv Detail & Related papers (2024-02-08T12:01:24Z)
- Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers [71.32827362323205]
We propose a new class of linear Transformers called FourierLearner-Transformers (FLTs). They incorporate a wide range of relative positional encoding mechanisms (RPEs). These include regular RPE techniques applied to sequential data, as well as novel RPEs operating on geometric data embedded in higher-dimensional Euclidean spaces. (A Fourier-feature RPE sketch appears after this list.)
arXiv Detail & Related papers (2023-02-03T18:57:17Z)
- Your Transformer May Not be as Powerful as You Expect [88.11364619182773]
We mathematically analyze the power of RPE-based Transformers: whether the model is capable of approximating any continuous sequence-to-sequence function. We present a negative result, showing that there exist continuous sequence-to-sequence functions that RPE-based Transformers cannot approximate, no matter how deep and wide the network is. We then develop a novel attention module, Universal RPE-based (URPE) Attention, which satisfies the conditions for universality. (A URPE attention sketch appears after this list.)
arXiv Detail & Related papers (2022-05-26T14:51:30Z)
- XAI for Transformers: Better Explanations through Conservative Propagation [60.67748036747221]
We show that the gradient in a Transformer reflects the function only locally and thus fails to reliably identify the contribution of input features to the prediction. Our proposal can be seen as a proper extension of the well-established LRP method to Transformers. (A conservative-propagation sketch appears after this list.)
arXiv Detail & Related papers (2022-02-15T10:47:11Z)
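For the PaTH Attention entry above, here is a minimal NumPy sketch of position encoding via accumulated Householder reflections. It only illustrates the algebraic idea; the paper's scheme is data-dependent and uses a compact parallel form, and all names here are our own.

```python
import numpy as np

def householder(v):
    """Reflection H = I - 2 v v^T (v normalized): orthogonal and symmetric."""
    v = v / np.linalg.norm(v)
    return np.eye(v.shape[0]) - 2.0 * np.outer(v, v)

def accumulated_transforms(V):
    """V: (T, d) reflection vectors; returns running products H_1 @ ... @ H_t."""
    P, acc = [], np.eye(V.shape[1])
    for v in V:
        acc = acc @ householder(v)
        P.append(acc)
    return np.stack(P)                     # (T, d, d), each factor orthogonal

rng = np.random.default_rng(1)
T, d = 5, 4
V = rng.normal(size=(T, d))                # per-position vectors (random here)
P = accumulated_transforms(V)
q = rng.normal(size=(T, d))
k = rng.normal(size=(T, d))
qr = np.einsum('tij,tj->ti', P, q)         # q_i -> P_i q_i
kr = np.einsum('tij,tj->ti', P, k)         # k_j -> P_j k_j
# Since each H is an involution, q_i^T P_i^T P_j k_j collapses to the product
# of reflections accumulated strictly between positions i and j, so the logits
# depend on positions only through a relative, path-like transformation.
logits = qr @ kr.T                         # (T, T) attention logits
print(logits.shape)
```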
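For the spectral-analysis entry ("Unpacking Positional Encoding in Transformers"), a small sketch of the Toeplitz view: a relative positional bias b(|i-j|) forms a symmetric Toeplitz matrix, and its spectrum is well approximated by the DFT of its circularly extended generating row (its symbol). The decaying bias below is an arbitrary example, not from the paper.

```python
import numpy as np

T = 64
b = np.exp(-0.1 * np.arange(T))       # example bias b(|i-j|): decay with distance
idx = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])
B = b[idx]                            # symmetric Toeplitz matrix, B[i, j] = b(|i-j|)
eigvals = np.linalg.eigvalsh(B)
# Circulant embedding: the DFT of the circularly extended first row is real
# and approximates the Toeplitz spectrum for large T (the "symbol" view).
symbol = np.fft.fft(np.concatenate([b, b[:0:-1]])).real
print(eigvals.min(), eigvals.max())
print(symbol.min(), symbol.max())
```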
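For the DAPE V2 entry, a sketch of treating the attention-score matrix as a 2D feature map and refining it with a small convolution before the softmax. The random 3x3 kernel and the residual placement are assumptions; the paper learns this processing and integrates it for length extrapolation.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same' 2D convolution of a (T, T) map with a 3x3 kernel."""
    T = x.shape[0]
    xp = np.pad(x, 1)                          # zero-pad both spatial dims
    out = np.zeros_like(x)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * xp[di:di + T, dj:dj + T]
    return out

rng = np.random.default_rng(2)
T, d = 8, 16
q = rng.normal(size=(T, d))
kmat = rng.normal(size=(T, d))
scores = q @ kmat.T / np.sqrt(d)               # raw attention logits (T, T)
kernel = 0.1 * rng.normal(size=(3, 3))         # stand-in for a learned kernel
scores = scores + conv2d_same(scores, kernel)  # treat logits as a feature map
causal = np.tril(np.ones((T, T), dtype=bool))
scores = np.where(causal, scores, -np.inf)     # mask after the refinement
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn = attn / attn.sum(axis=-1, keepdims=True)
print(attn.shape)                              # (8, 8), rows sum to 1
```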
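For the Fourier-RPE entry, a sketch of parameterizing a relative positional bias with learned Fourier features. This shows only the general Fourier-domain idea; the paper's FourierLearner-Transformers target linear attention, which this dense (T, T) bias does not implement.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_freq = 16, 4
omega = rng.normal(size=n_freq)                # learned frequencies
a = rng.normal(size=n_freq)                    # learned cosine amplitudes
b = rng.normal(size=n_freq)                    # learned sine amplitudes
rel = np.arange(T)[:, None] - np.arange(T)[None, :]     # relative offsets i - j
phase = rel[..., None] * omega                 # (T, T, n_freq)
bias = (a * np.cos(phase) + b * np.sin(phase)).sum(-1)  # (T, T) RPE bias
print(bias.shape)                              # added to attention logits
```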
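For the URPE entry, a sketch of Universal RPE-based attention in the form the abstract describes: the softmax attention matrix is multiplied elementwise by a learned Toeplitz (relative-position) matrix before being applied to the values. Initialization and multi-head details are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(4)
T, d = 6, 8
Q = rng.normal(size=(T, d))
K = rng.normal(size=(T, d))
V = rng.normal(size=(T, d))
c = rng.normal(size=2 * T - 1)                 # learned relative parameters
idx = np.arange(T)[:, None] - np.arange(T)[None, :]
C = c[idx + T - 1]                             # Toeplitz matrix, C[i, j] = c_{i-j}
A = softmax(Q @ K.T / np.sqrt(d))
out = (A * C) @ V                              # URPE: (A elementwise-times C) V
print(out.shape)                               # (6, 8)
```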
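For the conservative-propagation entry, a sketch of one of its key prescriptions: propagate relevance through an attention head by holding the softmax matrix A constant and applying the epsilon-rule for a linear layer to the value path. Query/key and LayerNorm handling are omitted, and the names are ours.

```python
import numpy as np

def lrp_attention_values(A, V, R_out, eps=1e-6):
    """A: (T, T) attention weights (held constant), V: (T, d) values,
    R_out: (T, d) relevance of the head output A @ V. Returns R_V: (T, d)."""
    Z = A @ V                                  # forward output of the head
    S = R_out / (Z + eps * np.sign(Z))         # stabilized relevance ratios
    return V * (A.T @ S)                       # eps-rule: R_V = V * (A^T S)

rng = np.random.default_rng(5)
T, d = 5, 4
A = rng.random((T, T))
A = A / A.sum(-1, keepdims=True)               # row-stochastic, like softmax
V = rng.normal(size=(T, d))
R_out = rng.normal(size=(T, d))
R_V = lrp_attention_values(A, V, R_out)
print(np.allclose(R_V.sum(), R_out.sum(), atol=1e-3))  # True: conservation
```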
This list is automatically generated from the titles and abstracts of the papers on this site.