ViT-FOD: A Vision Transformer based Fine-grained Object Discriminator
- URL: http://arxiv.org/abs/2203.12816v1
- Date: Thu, 24 Mar 2022 02:34:57 GMT
- Title: ViT-FOD: A Vision Transformer based Fine-grained Object Discriminator
- Authors: Zi-Chao Zhang, Zhen-Duo Chen, Yongxin Wang, Xin Luo, Xin-Shun Xu
- Abstract summary: We propose a novel ViT-based fine-grained object discriminator for Fine-Grained Visual Classification (FGVC) tasks.
Besides a ViT backbone, it introduces three novel components, i.e., Attention Patch Combination (APC), Critical Regions Filter (CRF), and Complementary Tokens Integration (CTI).
We conduct comprehensive experiments on widely used datasets and the results demonstrate that ViT-FOD is able to achieve state-of-the-art performance.
- Score: 21.351034332423374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, several Vision Transformer (ViT) based methods have been proposed
for Fine-Grained Visual Classification (FGVC). These methods significantly
surpass existing CNN-based ones, demonstrating the effectiveness of ViT in FGVC
tasks. However, applying ViT directly to FGVC has some limitations. First, ViT
splits images into patches and computes attention between every pair of patches,
which may result in heavy redundant computation and unsatisfactory performance
when handling fine-grained images with complex backgrounds and small objects.
Second, a standard ViT only utilizes the class token of the final layer for
classification, which is not enough to extract comprehensive fine-grained
information. To address these issues, we propose a novel ViT-based fine-grained
object discriminator for FGVC tasks, ViT-FOD for short. Specifically, besides a
ViT backbone, it introduces three novel components, i.e., Attention Patch
Combination (APC), Critical Regions Filter (CRF), and Complementary Tokens
Integration (CTI). Among them, APC pieces together informative patches from two
images to generate a new image so that redundant computation is reduced. CRF
emphasizes tokens corresponding to discriminative regions to generate a new
class token for subtle feature learning. To extract comprehensive information,
CTI integrates the complementary information captured by class tokens in
different ViT layers. We conduct comprehensive experiments on widely used
datasets, and the results demonstrate that ViT-FOD achieves state-of-the-art
performance.
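The abstract only sketches these components at a high level. As a rough illustration, the following is a minimal PyTorch sketch (not the authors' implementation) of the two token-level ideas described above: CRF re-weights patch tokens by their attention to the class token to form an additional, region-focused class token, and CTI fuses class tokens taken from several ViT layers before classification. The module names, the fusion rule, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CriticalRegionsFilter(nn.Module):
    """Sketch of the CRF idea: emphasize patch tokens that attend strongly to
    the class token and aggregate them into an extra, region-focused token."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, patch_tokens: torch.Tensor, cls_attn: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, D); cls_attn: (B, N) class-to-patch attention scores
        weights = cls_attn.softmax(dim=-1).unsqueeze(-1)    # (B, N, 1)
        region_token = (weights * patch_tokens).sum(dim=1)  # (B, D)
        return self.proj(region_token)

class ComplementaryTokensIntegration(nn.Module):
    """Sketch of the CTI idea: fuse class tokens collected from several ViT layers."""
    def __init__(self, dim: int, num_layers: int, num_classes: int):
        super().__init__()
        self.fuse = nn.Linear(dim * num_layers, dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, cls_tokens: list) -> torch.Tensor:
        # cls_tokens: list of (B, D) class tokens from different layers
        fused = self.fuse(torch.cat(cls_tokens, dim=-1))
        return self.head(fused)

# Toy usage with random tensors standing in for a ViT backbone's outputs.
B, N, D, C = 2, 196, 768, 200
crf = CriticalRegionsFilter(D)
cti = ComplementaryTokensIntegration(D, num_layers=3, num_classes=C)
region_token = crf(torch.randn(B, N, D), torch.randn(B, N))
logits = cti([torch.randn(B, D), torch.randn(B, D), region_token])
print(logits.shape)  # torch.Size([2, 200])
```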
Related papers
- LookupViT: Compressing visual information to a limited number of tokens [36.83826969693139]
Vision Transformers (ViT) have emerged as the de facto choice for numerous industry-grade vision solutions.
But their inference cost can be prohibitive in many settings, as they compute self-attention in each layer, which has quadratic complexity in the number of tokens.
In this work, we introduce LookupViT, which exploits the sparsity of visual information to reduce ViT inference cost.
arXiv Detail & Related papers (2024-07-17T17:22:43Z)
- DeiT-LT Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets [30.178427266135756]
Vision Transformer (ViT) has emerged as a prominent architecture for various computer vision tasks.
ViT requires a large amount of data for pre-training.
We introduce DeiT-LT to tackle the problem of training ViTs from scratch on long-tailed datasets.
arXiv Detail & Related papers (2024-04-03T17:58:21Z)
- Denoising Vision Transformers [43.03068202384091]
We propose a two-stage denoising approach, termed Denoising Vision Transformers (DVT).
In the first stage, we separate the clean features from those contaminated by positional artifacts by enforcing cross-view feature consistency with neural fields on a per-image basis.
In the second stage, we train a lightweight transformer block to predict clean features from raw ViT outputs, leveraging the derived estimates of the clean features as supervision.
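The second stage described above amounts to a small learned denoiser trained on top of frozen ViT features. A minimal sketch of that idea, assuming a single standard transformer encoder layer as the lightweight block (this is not the DVT code; the names and shapes are assumptions):

```python
import torch
import torch.nn as nn

class FeatureDenoiser(nn.Module):
    """Hypothetical stand-in for DVT's stage-two denoiser: a lightweight
    transformer block mapping raw (artifact-contaminated) ViT patch features
    to denoised features, supervised by the stage-one clean estimates."""
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True
        )

    def forward(self, raw_feats: torch.Tensor) -> torch.Tensor:
        # raw_feats: (B, N, D) patch features from a frozen ViT
        return self.block(raw_feats)

denoiser = FeatureDenoiser()
raw = torch.randn(2, 196, 768)      # frozen ViT outputs (placeholder)
target = torch.randn(2, 196, 768)   # stage-one clean-feature estimates (placeholder)
loss = nn.functional.mse_loss(denoiser(raw), target)
loss.backward()
```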
arXiv Detail & Related papers (2024-01-05T18:59:52Z) - Making Vision Transformers Efficient from A Token Sparsification View [26.42498120556985]
We propose a novel Semantic Token ViT (STViT) for efficient global and local vision transformers.
Our method can achieve competitive results compared to the original networks in object detection and instance segmentation, with over 30% FLOPs reduction for the backbone.
In addition, we design an STViT-R(ecover) network to restore detailed spatial information on top of STViT, making it work for downstream tasks.
arXiv Detail & Related papers (2023-03-15T15:12:36Z) - Exploring Efficient Few-shot Adaptation for Vision Transformers [70.91692521825405]
We propose a novel efficient Transformer Tuning (eTT) method that facilitates fine-tuning ViTs for few-shot learning tasks.
Key novelties come from the newly presented Attentive Prefix Tuning (APT) and Domain Residual Adapter (DRA).
We conduct extensive experiments to show the efficacy of our model.
arXiv Detail & Related papers (2023-01-06T08:42:05Z) - Self-Promoted Supervision for Few-Shot Transformer [178.52948452353834]
Self-promoted sUpervisioN (SUN) is a few-shot learning framework for vision transformers (ViTs).
SUN pretrains the ViT on the few-shot learning dataset and then uses it to generate individual location-specific supervision for guiding each patch token.
Experiments show that SUN with ViTs significantly surpasses other ViT-based few-shot learning frameworks and is the first to achieve higher performance than state-of-the-art CNNs.
arXiv Detail & Related papers (2022-03-14T12:53:27Z) - Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain
Analysis: From Theory to Practice [111.47461527901318]
Vision Transformer (ViT) has recently demonstrated promise in computer vision problems.
ViT saturates quickly as depth increases, due to the observed attention collapse or patch uniformity.
We propose two techniques to mitigate the undesirable low-pass limitation.
arXiv Detail & Related papers (2022-03-09T23:55:24Z) - Coarse-to-Fine Vision Transformer [83.45020063642235]
We propose a coarse-to-fine vision transformer (CF-ViT) to relieve computational burden while retaining performance.
Our proposed CF-ViT is motivated by two important observations in modern ViT models.
Our CF-ViT reduces the FLOPs of LV-ViT by 53% while also achieving 2.01x throughput.
arXiv Detail & Related papers (2022-03-08T02:57:49Z) - On Improving Adversarial Transferability of Vision Transformers [97.17154635766578]
Vision transformers (ViTs) process input images as sequences of patches via self-attention.
We study the adversarial feature space of ViT models and their transferability.
We introduce two novel strategies specific to the architecture of ViT models.
arXiv Detail & Related papers (2021-06-08T08:20:38Z) - So-ViT: Mind Visual Tokens for Vision Transformer [27.243241133304785]
We propose a new classification paradigm, where second-order, cross-covariance pooling of visual tokens is combined with the class token for final classification.
We develop a light-weight, hierarchical module based on off-the-shelf convolutions for visual token embedding.
The results show our models, when trained from scratch, outperform the competing ViT variants, while being on par with or better than state-of-the-art CNN models.
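As a rough illustration of the paradigm described above (not the authors' code; the normalization and the way the two branches are combined are assumptions), covariance pooling of visual tokens combined with the class token might look like:

```python
import torch
import torch.nn as nn

class SecondOrderHead(nn.Module):
    """Sketch: covariance pooling of visual tokens, combined additively with
    a class-token prediction (the combination rule is an assumption)."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.cls_head = nn.Linear(dim, num_classes)
        self.cov_head = nn.Linear(dim * dim, num_classes)

    def forward(self, cls_token: torch.Tensor, visual_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (B, N, D); centre them and take the cross-covariance.
        centred = visual_tokens - visual_tokens.mean(dim=1, keepdim=True)
        cov = centred.transpose(1, 2) @ centred / centred.shape[1]  # (B, D, D)
        return self.cls_head(cls_token) + self.cov_head(cov.flatten(1))

head = SecondOrderHead(dim=192, num_classes=1000)
logits = head(torch.randn(2, 192), torch.randn(2, 196, 192))
print(logits.shape)  # torch.Size([2, 1000])
```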
arXiv Detail & Related papers (2021-04-22T09:05:09Z)
- Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet [128.96032932640364]
We propose a new Tokens-To-Token Vision Transformers (T2T-ViT) to solve vision tasks.
T2T-ViT reduces the parameter count and MACs of vanilla ViT by half, while achieving more than 2.5% improvement when trained from scratch on ImageNet.
For example, a T2T-ViT of size comparable to ResNet50 can achieve 80.7% top-1 accuracy on ImageNet.
arXiv Detail & Related papers (2021-01-28T13:25:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.