Robustifying Token Attention for Vision Transformers
- URL: http://arxiv.org/abs/2303.11126v3
- Date: Wed, 6 Sep 2023 11:09:26 GMT
- Title: Robustifying Token Attention for Vision Transformers
- Authors: Yong Guo, David Stutz, Bernt Schiele
- Abstract summary: Vision transformers (ViTs) still suffer from significant drops in accuracy in the presence of common corruptions.
We propose two general techniques to make attention more stable.
First, our Token-aware Average Pooling (TAP) module encourages the local neighborhood of each token to take part in the attention mechanism.
Second, we force the output tokens to aggregate information from a diverse set of input tokens rather than focusing on just a few.
- Score: 72.07710236246285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the success of vision transformers (ViTs), they still suffer from
significant drops in accuracy in the presence of common corruptions, such as
noise or blur. Interestingly, we observe that the attention mechanism of ViTs
tends to rely on only a few important tokens, a phenomenon we call token overfocusing.
More critically, these tokens are not robust to corruptions, often leading to
highly diverging attention patterns. In this paper, we intend to alleviate this
overfocusing issue and make attention more stable through two general
techniques: First, our Token-aware Average Pooling (TAP) module encourages the
local neighborhood of each token to take part in the attention mechanism.
Specifically, TAP learns average pooling schemes for each token such that the
information of potentially important tokens in the neighborhood can adaptively
be taken into account. Second, we force the output tokens to aggregate
information from a diverse set of input tokens rather than focusing on just a
few by using our Attention Diversification Loss (ADL). We achieve this by
penalizing high cosine similarity between the attention vectors of different
tokens. In experiments, we apply our methods to a wide range of transformer
architectures and improve robustness significantly. For example, we improve
corruption robustness on ImageNet-C by 2.4% while improving accuracy by 0.4%
based on state-of-the-art robust architecture FAN. Also, when fine-tuning on
semantic segmentation tasks, we improve robustness on CityScapes-C by 2.4% and
ACDC by 3.0%. Our code is available at https://github.com/guoyongcs/TAPADL.
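As a rough illustration of the second technique, the snippet below sketches an ADL-style penalty: it discourages different output tokens from focusing on the same few input tokens by penalizing high pairwise cosine similarity between their attention vectors. This is a minimal sketch written from the abstract alone; the function name, tensor layout, and averaging are assumptions rather than the paper's implementation (the official code at https://github.com/guoyongcs/TAPADL contains the actual formulation).

```python
import torch
import torch.nn.functional as F

def attention_diversification_loss(attn: torch.Tensor) -> torch.Tensor:
    """ADL-style penalty (sketch). attn: (batch, heads, N, N) attention
    weights, where attn[..., i, :] is the attention vector of output token i."""
    b, h, n, _ = attn.shape
    a = F.normalize(attn, dim=-1)                 # unit-length attention vectors
    sim = a @ a.transpose(-1, -2)                 # pairwise cosine similarity, (b, h, n, n)
    eye = torch.eye(n, device=attn.device, dtype=attn.dtype)
    off_diag = sim * (1.0 - eye)                  # ignore each token's self-similarity
    return off_diag.sum() / (b * h * n * (n - 1))
```

In training, such a term would be added to the task loss with a weighting hyperparameter, e.g. `loss = ce_loss + lam * attention_diversification_loss(attn)`, where `lam` is an assumed name for the trade-off weight.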
Related papers
- A2SF: Accumulative Attention Scoring with Forgetting Factor for Token Pruning in Transformer Decoder [1.6114012813668932]
We propose the Accumulative Attention Score with Forgetting Factor (A2SF) technique, which introduces a Forgetting Factor into the Attention Score accumulation process.
A2SF penalizes past Attention Scores generated from old tokens by repeatedly multiplying them by the Forgetting Factor over time.
We verify the accuracy improvement of A2SF on the OPT and LLaMA models; A2SF improves the accuracy of LLaMA 2 by up to 7.8% and 5.1% in the 1-shot and 0-shot settings, respectively (a sketch of this forgetting-factor accumulation appears after this list).
arXiv Detail & Related papers (2024-07-30T01:13:42Z) - ToSA: Token Selective Attention for Efficient Vision Transformers [50.13756218204456]
ToSA is a token selective attention approach that can identify the tokens that need to be attended to, as well as those that can skip a transformer layer.
We show that ToSA can significantly reduce computation costs while maintaining accuracy on the ImageNet classification benchmark.
arXiv Detail & Related papers (2024-06-13T05:17:21Z) - LeMeViT: Efficient Vision Transformer with Learnable Meta Tokens for Remote Sensing Image Interpretation [37.72775203647514]
This paper proposes to use learnable meta tokens to form sparse tokens, which effectively capture key information and improve inference speed.
By employing Dual Cross-Attention (DCA) in the early stages with dense visual tokens, we obtain the hierarchical architecture LeMeViT with various sizes.
Experimental results in classification and dense prediction tasks show that LeMeViT has a significant $1.7\times$ speedup, fewer parameters, and competitive performance compared to the baseline models.
arXiv Detail & Related papers (2024-05-16T03:26:06Z) - How can objects help action recognition? [74.29564964727813]
We investigate how we can use knowledge of objects to design better video models.
First, we propose an object-guided token sampling strategy that enables us to retain a small fraction of the input tokens.
Second, we propose an object-aware attention module that enriches our feature representation with object information.
arXiv Detail & Related papers (2023-06-20T17:56:16Z) - Efficient Video Action Detection with Token Dropout and Context Refinement [67.10895416008911]
We propose an end-to-end framework based on vision transformers (ViTs) for efficient video action detection.
First, within a video clip, we keep most tokens from its keyframe while preserving tokens relevant to actor motions from other frames.
Second, we refine the scene context by leveraging the remaining tokens to better recognize actor identities.
arXiv Detail & Related papers (2023-04-17T17:21:21Z) - Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers [34.19166698049552]
Vision Transformers (ViTs) have shown competitive performance advantages over convolutional neural networks (CNNs).
We propose a novel approach to learn instance-dependent attention patterns, by devising a lightweight connectivity predictor module.
We show that our method reduces the FLOPs of MHSA by 48% to 69% while the accuracy drop is within 0.4%.
arXiv Detail & Related papers (2023-03-24T02:12:28Z) - Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers [32.972945618608726]
Vision transformers have achieved significant improvements on various vision tasks but their quadratic interactions between tokens significantly reduce computational efficiency.
We propose an efficient token decoupling and merging method that can jointly consider the token importance and diversity for token pruning.
Our method can even improve the accuracy of DeiT-T by 0.1% after reducing its FLOPs by 40%.
arXiv Detail & Related papers (2022-11-21T09:57:11Z) - PSViT: Better Vision Transformer via Token Pooling and Attention Sharing [114.8051035856023]
We propose PSViT: a ViT with token Pooling and attention Sharing to reduce redundancy.
Experimental results show that the proposed scheme can achieve up to 6.6% accuracy improvement in ImageNet classification.
arXiv Detail & Related papers (2021-08-07T11:30:54Z) - DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification [134.9393799043401]
We propose a dynamic token sparsification framework to prune redundant tokens based on the input.
By hierarchically pruning 66% of the input tokens, our method reduces FLOPs by 31% to 37% and improves throughput by over 40%.
DynamicViT models can achieve very competitive complexity/accuracy trade-offs compared to state-of-the-art CNNs and vision transformers on ImageNet.
arXiv Detail & Related papers (2021-06-03T17:57:41Z) - KVT: k-NN Attention for Boosting Vision Transformers [44.189475770152185]
We propose a sparse attention scheme, dubbed k-NN attention, for boosting vision transformers.
The proposed k-NN attention naturally inherits the local bias of CNNs without introducing convolutional operations.
We verify, both theoretically and empirically, that k-NN attention is powerful in distilling noise from input tokens and in speeding up training (a sketch of such top-k attention masking appears after this list).
arXiv Detail & Related papers (2021-05-28T06:49:10Z)
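For concreteness, below is a minimal sketch of the forgetting-factor accumulation referenced in the A2SF entry above: the accumulated attention score of every cached token is decayed by a forgetting factor at each decoding step before the newest attention row is added, and the lowest-scoring tokens become pruning candidates. Function names, shapes, and the 0.9 default are illustrative assumptions, not the paper's interface.

```python
import torch

def a2sf_update(acc_scores: torch.Tensor, attn_row: torch.Tensor,
                forgetting_factor: float = 0.9) -> torch.Tensor:
    """One decoding step of A2SF-style score accumulation (sketch).
    acc_scores: (num_cached,) accumulated scores for cached tokens.
    attn_row: (num_cached,) attention of the newest query over those tokens."""
    # Older contributions decay geometrically: the whole accumulator is
    # multiplied by the forgetting factor before the new scores are added.
    return forgetting_factor * acc_scores + attn_row

def select_tokens_to_keep(acc_scores: torch.Tensor, budget: int) -> torch.Tensor:
    """Indices of the `budget` cached tokens with the highest accumulated scores."""
    return torch.topk(acc_scores, k=min(budget, acc_scores.numel())).indices
```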
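Likewise, a compact sketch of k-NN (top-k) attention in the spirit of the KVT entry above: each query keeps only its k most similar keys and masks out the rest before the softmax, sparsifying the attention map without any convolutional operations. The thresholding strategy and tensor layout are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def knn_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                  top_k: int = 16) -> torch.Tensor:
    """Top-k sparse attention (sketch). q, k, v: (batch, heads, N, dim)."""
    scores = q @ k.transpose(-1, -2) / q.size(-1) ** 0.5          # (b, h, N, N)
    kk = min(top_k, scores.size(-1))
    # The k-th largest score in each query row acts as a threshold.
    thresh = torch.topk(scores, k=kk, dim=-1).values[..., -1:]    # (b, h, N, 1)
    scores = scores.masked_fill(scores < thresh, float("-inf"))   # drop non-neighbors
    return F.softmax(scores, dim=-1) @ v
```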
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content on this site (including all information) and is not responsible for any consequences of its use.