TokenMix: Rethinking Image Mixing for Data Augmentation in Vision
Transformers
- URL: http://arxiv.org/abs/2207.08409v3
- Date: Wed, 19 Apr 2023 14:30:37 GMT
- Title: TokenMix: Rethinking Image Mixing for Data Augmentation in Vision
Transformers
- Authors: Jihao Liu and Boxiao Liu and Hang Zhou and Hongsheng Li and Yu Liu
- Abstract summary: CutMix is a popular augmentation technique commonly used for training modern convolutional and transformer vision networks.
We propose a novel data augmentation technique, TokenMix, to improve the performance of vision transformers.
- Score: 36.630476419392046
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: CutMix is a popular augmentation technique commonly used for training modern
convolutional and transformer vision networks. It was originally designed to
encourage Convolutional Neural Networks (CNNs) to focus more on an image's global
context instead of local information, which greatly improves the performance of
CNNs. However, we found it to have limited benefits for transformer-based
architectures that naturally have a global receptive field. In this paper, we
propose a novel data augmentation technique, TokenMix, to improve the performance
of vision transformers. TokenMix mixes two images at the token level by
partitioning the mixing region into multiple separate parts. Besides, we show
that the mixed learning target in CutMix, a linear combination of the two
ground-truth labels, can be inaccurate and sometimes counter-intuitive. To
obtain a more suitable target, we propose to assign the target score according
to the content-based neural activation maps of the two images from a
pre-trained teacher model, which does not need to have high performance. Through
extensive experiments on various vision transformer architectures, we show that
our proposed TokenMix helps vision transformers focus on the foreground area to
infer the classes and enhances their robustness to occlusion, with consistent
performance gains. Notably, we improve DeiT-T/S/B by +1% ImageNet top-1
accuracy. TokenMix also benefits from longer training, achieving 81.2% top-1
accuracy on ImageNet with DeiT-S trained for 400 epochs. Code is available at
https://github.com/Sense-X/TokenMix.
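To make the two ideas above concrete, below is a minimal, hedged sketch of token-level mixing with a content-aware target. It is not the official implementation (see the repository linked above): the random mask-sampling heuristic, the tensor shapes, and the way the teacher activation maps cam_a/cam_b are obtained are assumptions made for illustration only.

```python
# Minimal, illustrative sketch of TokenMix-style augmentation (NOT the official
# code; see https://github.com/Sense-X/TokenMix). Shapes and the mask-sampling
# heuristic are assumptions for illustration.
import torch


def sample_token_mask(num_tokens: int, mix_ratio: float) -> torch.Tensor:
    """Select ~mix_ratio of the patch tokens as several scattered parts,
    rather than one contiguous rectangle as in CutMix."""
    num_mixed = int(num_tokens * mix_ratio)
    idx = torch.randperm(num_tokens)[:num_mixed]
    mask = torch.zeros(num_tokens)
    mask[idx] = 1.0  # 1 -> token taken from image B, 0 -> token kept from image A
    return mask


def token_mix(tokens_a, tokens_b, cam_a, cam_b, mix_ratio=0.5):
    """
    tokens_a, tokens_b : (B, N, D) patchified inputs of two images.
    cam_a, cam_b       : (B, N) token-level activation maps from a pre-trained
                         teacher, each row normalised to sum to 1 (how these maps
                         are computed is assumed here, not taken from the paper).
    Returns the mixed tokens and the per-image target weights (lam_a, lam_b).
    """
    B, N, _ = tokens_a.shape
    mask = sample_token_mask(N, mix_ratio).to(tokens_a.device)             # (N,)
    mixed = tokens_a * (1 - mask)[None, :, None] + tokens_b * mask[None, :, None]

    # Content-aware target: weight each label by how much of that image's
    # activation mass survives in the mixed sample, rather than by the raw
    # area ratio used in CutMix.
    lam_a = (cam_a * (1 - mask)[None, :]).sum(dim=1)                       # (B,)
    lam_b = (cam_b * mask[None, :]).sum(dim=1)
    denom = (lam_a + lam_b).clamp_min(1e-6)
    return mixed, lam_a / denom, lam_b / denom
```

Under these assumptions, the mixed training target is lam_a * y_a + lam_b * y_b, so a mask that removes an image's foreground also shrinks that image's share of the label.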
Related papers
- SMMix: Self-Motivated Image Mixing for Vision Transformers [65.809376136455]
CutMix is a vital augmentation strategy that determines the performance and generalization ability of vision transformers (ViTs).
Existing CutMix variants tackle the inconsistency between the mixed images and their labels by generating more consistent mixed images or more precise mixed labels.
We propose an efficient and effective Self-Motivated image Mixing method (SMMix), which motivates both image and label enhancement by the model under training itself.
arXiv Detail & Related papers (2022-12-26T00:19:39Z)
- TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers [8.099977107670917]
TokenMixup is an efficient attention-guided token-level data augmentation method.
A variant of TokenMixup mixes tokens within a single instance, thereby enabling multi-scale feature augmentation.
Experiments show that our methods significantly improve the baseline models' performance on CIFAR and ImageNet-1K.
arXiv Detail & Related papers (2022-10-14T06:36:31Z)
- Token-Label Alignment for Vision Transformers [93.58540411138164]
Data mixing strategies (e.g., CutMix) have shown the ability to greatly improve the performance of convolutional neural networks (CNNs).
We identify a token fluctuation phenomenon that has suppressed the potential of data mixing strategies.
We propose a token-label alignment (TL-Align) method to trace the correspondence between transformed tokens and the original tokens to maintain a label for each token.
arXiv Detail & Related papers (2022-10-12T17:54:32Z)
- Adaptive Split-Fusion Transformer [90.04885335911729]
We propose an Adaptive Split-Fusion Transformer (ASF-former) to treat convolutional and attention branches differently with adaptive weights.
Experiments on standard benchmarks, such as ImageNet-1K, show that our ASF-former outperforms its CNN and transformer counterparts, as well as prior hybrid designs, in terms of accuracy.
arXiv Detail & Related papers (2022-04-26T10:00:28Z)
- Convolutional Xformers for Vision [2.7188347260210466]
Vision transformers (ViTs) have found only limited practical use in processing images, in spite of their state-of-the-art accuracy on certain benchmarks.
The reasons for their limited use include their need for larger training datasets and more computational resources compared to convolutional neural networks (CNNs).
We propose a linear attention-convolution hybrid architecture -- Convolutional X-formers for Vision (CXV) -- to overcome these limitations.
We replace the quadratic attention with linear attention mechanisms, such as Performer, Nyströmformer, and Linear Transformer, to reduce GPU usage (an illustrative sketch of this style of linear attention follows the list below).
arXiv Detail & Related papers (2022-01-25T12:32:09Z)
- UniFormer: Unifying Convolution and Self-attention for Visual Recognition [69.68907941116127]
Convolutional neural networks (CNNs) and vision transformers (ViTs) have been two dominant frameworks in the past few years.
We propose a novel Unified transFormer (UniFormer) which seamlessly integrates the merits of convolution and self-attention in a concise transformer format.
Our UniFormer achieves 86.3% top-1 accuracy on ImageNet-1K classification.
arXiv Detail & Related papers (2022-01-24T04:39:39Z)
- Token Labeling: Training a 85.4% Top-1 Accuracy Vision Transformer with 56M Parameters on ImageNet [86.95679590801494]
We explore the potential of vision transformers in ImageNet classification by developing a bag of training techniques.
We show that by slightly tuning the structure of vision transformers and introducing token labeling, our models are able to achieve better results than their CNN counterparts.
arXiv Detail & Related papers (2021-04-22T04:43:06Z)
- CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification [17.709880544501758]
We propose a dual-branch transformer to combine image patches of different sizes to produce stronger image features.
Our approach processes small-patch and large-patch tokens with two separate branches of different computational complexity.
Our proposed cross-attention requires only linear rather than quadratic computational and memory complexity.
arXiv Detail & Related papers (2021-03-27T13:03:17Z)
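As referenced in the Convolutional Xformers for Vision entry above, the sketch below illustrates the kernel-feature-map form of linear attention used by the Linear Transformer (Performer and Nyströmformer use different approximations of the same O(N) idea). The shapes and the elu-based feature map are standard for that method but are assumptions relative to the CXV paper itself.

```python
# Minimal sketch of kernelised linear attention: softmax(Q K^T) V is replaced by
# phi(Q) (phi(K)^T V) with phi(x) = elu(x) + 1, so the cost grows linearly in the
# number of tokens N instead of quadratically. Shapes below are assumptions.
import torch
import torch.nn.functional as F


def linear_attention(q, k, v, eps=1e-6):
    """q, k: (B, H, N, D); v: (B, H, N, E). Returns attention output (B, H, N, E)."""
    phi_q = F.elu(q) + 1  # positive feature maps
    phi_k = F.elu(k) + 1
    kv = torch.einsum("bhnd,bhne->bhde", phi_k, v)                 # (B, H, D, E)
    norm = torch.einsum("bhnd,bhd->bhn", phi_q, phi_k.sum(dim=2)) + eps
    return torch.einsum("bhnd,bhde->bhne", phi_q, kv) / norm[..., None]
```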
This list is automatically generated from the titles and abstracts of the papers on this site.