Attention that does not Explain Away
- URL: http://arxiv.org/abs/2009.14308v1
- Date: Tue, 29 Sep 2020 21:05:39 GMT
- Title: Attention that does not Explain Away
- Authors: Nan Ding, Xinjie Fan, Zhenzhong Lan, Dale Schuurmans, Radu Soricut
- Abstract summary: Models based on the Transformer architecture have achieved better accuracy than the ones based on competing architectures for a large set of tasks.
A unique feature of the Transformer is its universal application of a self-attention mechanism, which allows for free information flow at arbitrary distances.
We propose a doubly-normalized attention scheme that is simple to implement and provides theoretical guarantees for avoiding the "explaining away" effect.
- Score: 54.42960937271612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Models based on the Transformer architecture have achieved better accuracy
than the ones based on competing architectures for a large set of tasks. A
unique feature of the Transformer is its universal application of a
self-attention mechanism, which allows for free information flow at arbitrary
distances. Following a probabilistic view of the attention via the Gaussian
mixture model, we find empirical evidence that the Transformer attention tends
to "explain away" certain input neurons. To compensate for this, we propose a
doubly-normalized attention scheme that is simple to implement and provides
theoretical guarantees for avoiding the "explaining away" effect without
introducing significant computational or memory cost. Empirically, we show that
the new attention schemes result in improved performance on several well-known
benchmarks.
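The doubly-normalized scheme lends itself to a short illustration. The sketch below assumes one plausible reading of double normalization (exponentiate the logits, normalize over the query axis first, then renormalize over the key axis); consult the paper for the exact formulation, and note that all names here are ours, not the authors':

```python
import numpy as np

def softmax_attention(scores):
    # Standard Transformer attention: each query's weights over keys sum to 1.
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def doubly_normalized_attention(scores):
    # Sketch of a doubly-normalized scheme: normalize each key's column over
    # queries first, then renormalize each query's row over keys. The first
    # pass spreads attention mass so that no key column collapses toward zero,
    # which is the "explaining away" failure mode described in the abstract.
    e = np.exp(scores - scores.max())
    over_queries = e / e.sum(axis=0, keepdims=True)                 # columns sum to 1
    return over_queries / over_queries.sum(axis=-1, keepdims=True)  # rows sum to 1

rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 6))  # 4 queries attending over 6 keys
w = doubly_normalized_attention(scores)
```

Both functions return valid attention distributions (rows summing to one); the doubly-normalized variant adds only one extra normalization pass over the same score matrix, consistent with the abstract's claim of negligible extra computational and memory cost.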
Related papers
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- FAST: Factorizable Attention for Speeding up Transformers [1.3637227185793512]
We present a linearly scaled attention mechanism that maintains the full representation of the attention matrix without compromising on sparsification.
Results indicate that our attention mechanism has a robust performance and holds significant promise for diverse applications where self-attention is used.
arXiv Detail & Related papers (2024-02-12T18:59:39Z)
- Correlated Attention in Transformers for Multivariate Time Series [22.542109523780333]
We propose a novel correlated attention mechanism, which efficiently captures feature-wise dependencies, and can be seamlessly integrated within the encoder blocks of existing Transformers.
In particular, correlated attention operates across feature channels to compute cross-covariance matrices between queries and keys with different lag values, and selectively aggregate representations at the sub-series level.
This architecture facilitates automated discovery and representation learning of not only instantaneous but also lagged cross-correlations, while inherently capturing time series auto-correlation.
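The lag-based cross-covariance idea can be sketched generically. The helper below is a hypothetical illustration (the function name and exact normalization are ours, not the paper's): for each lag, queries at time t are paired with keys at time t + lag across feature channels, giving one d x d matrix per lag.

```python
import numpy as np

def lagged_cross_covariance(Q, K, lags):
    # Hypothetical sketch of feature-wise lagged correlation: for each lag,
    # pair queries at time t with keys at time t + lag and average the outer
    # products over time, yielding a (d x d) cross-covariance matrix per lag.
    T, d = Q.shape
    Qc = Q - Q.mean(axis=0)  # center each feature channel
    Kc = K - K.mean(axis=0)
    return {lag: Qc[: T - lag].T @ Kc[lag:] / (T - lag) for lag in lags}

rng = np.random.default_rng(1)
Q = rng.normal(size=(64, 8))  # 64 time steps, 8 feature channels
K = rng.normal(size=(64, 8))
covs = lagged_cross_covariance(Q, K, lags=(0, 1, 4))
```

Lag 0 recovers the instantaneous cross-covariance; nonzero lags capture the delayed dependencies the entry refers to.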
arXiv Detail & Related papers (2023-11-20T17:35:44Z)
- AttMEMO: Accelerating Transformers with Memoization on Big Memory Systems [10.585040856070941]
We introduce a novel embedding technique to find semantically similar inputs to identify computation similarity.
We enable 22% inference-latency reduction on average (up to 68%) with negligible loss in inference accuracy.
arXiv Detail & Related papers (2023-01-23T04:24:26Z)
- Convexifying Transformers: Improving optimization and understanding of transformer networks [56.69983975369641]
We study the training problem of attention/transformer networks and introduce a novel convex analytic approach.
We first introduce a convex alternative to the self-attention mechanism and reformulate the regularized training problem of transformer networks.
As a byproduct of our convex analysis, we reveal an implicit regularization mechanism, which promotes sparsity across tokens.
arXiv Detail & Related papers (2022-11-20T18:17:47Z)
- How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers [59.57128476584361]
We introduce PAPA, a new probing method that replaces the input-dependent attention matrices with constant ones.
We find that without any input-dependent attention, all models achieve competitive performance.
We show that better-performing models lose more from applying our method than weaker models, suggesting that the utilization of the input-dependent attention mechanism might be a factor in their success.
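The probing idea is easy to sketch: keep the weighted aggregation of values but replace the input-dependent softmax(QK^T) matrix with one that ignores the input. A uniform matrix is the simplest constant choice (PAPA's actual constant matrices are derived differently; this is only an illustration):

```python
import numpy as np

def constant_attention(seq_len):
    # Swap the input-dependent attention matrix for a constant one.
    # Uniform weights are the simplest illustrative choice.
    return np.full((seq_len, seq_len), 1.0 / seq_len)

def attend(values, attn):
    # Output remains a weighted sum of values; only the weights are fixed.
    return attn @ values

rng = np.random.default_rng(2)
V = rng.normal(size=(5, 16))  # 5 tokens, 16-dim value vectors
out = attend(V, constant_attention(5))
```

With uniform constant weights every token receives the same output (mean pooling over values), which makes concrete how much of the computation survives once attention stops depending on the input.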
arXiv Detail & Related papers (2022-11-07T12:37:54Z)
- XAI for Transformers: Better Explanations through Conservative Propagation [60.67748036747221]
We show that the gradient in a Transformer reflects the function only locally, and thus fails to reliably identify the contribution of input features to the prediction.
Our proposal can be seen as a proper extension of the well-established LRP method to Transformers.
arXiv Detail & Related papers (2022-02-15T10:47:11Z)
- Couplformer: Rethinking Vision Transformer with Coupling Attention Map [7.789667260916264]
The Transformer model has demonstrated its outstanding performance in the computer vision domain.
We propose a novel memory economy attention mechanism named Couplformer, which decouples the attention map into two sub-matrices.
Experiments show that the Couplformer reduces memory consumption by 28% compared with the regular Transformer.
arXiv Detail & Related papers (2021-12-10T10:05:35Z)
- Adaptive Multi-Resolution Attention with Linear Complexity [18.64163036371161]
We propose a novel attention structure named Adaptive Multi-Resolution Attention (AdaMRA).
We leverage a multi-resolution multi-head attention mechanism, enabling attention heads to capture long-range contextual information in a coarse-to-fine fashion.
To facilitate AdaMRA utilization by the scientific community, the code implementation will be made publicly available.
arXiv Detail & Related papers (2021-08-10T23:17:16Z)
- Bayesian Attention Belief Networks [59.183311769616466]
Attention-based neural networks have achieved state-of-the-art results on a wide range of tasks.
This paper introduces Bayesian attention belief networks, which construct a decoder network by modeling unnormalized attention weights.
We show that our method outperforms deterministic attention and state-of-the-art attention in accuracy, uncertainty estimation, generalization across domains, and adversarial attacks.
arXiv Detail & Related papers (2021-06-09T17:46:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.