Progressive Multi-stage Interactive Training in Mobile Network for
Fine-grained Recognition
- URL: http://arxiv.org/abs/2112.04223v1
- Date: Wed, 8 Dec 2021 10:50:03 GMT
- Title: Progressive Multi-stage Interactive Training in Mobile Network for
Fine-grained Recognition
- Authors: Zhenxin Wu, Qingliang Chen, Yifeng Liu, Yinqi Zhang, Chengkai Zhu,
Yang Yu
- Abstract summary: We propose a Progressive Multi-Stage Interactive training method with a Recursive Mosaic Generator (RMG-PMSI).
First, we propose a Recursive Mosaic Generator (RMG) that generates images with different granularities in different phases.
Then, the features of different stages pass through a Multi-Stage Interaction (MSI) module, which strengthens and complements the corresponding features of different stages.
Experiments on three prestigious fine-grained benchmarks show that RMG-PMSI can significantly improve the performance with good robustness and transferability.
- Score: 8.727216421226814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fine-grained Visual Classification (FGVC) aims to identify objects from
subcategories. It is a very challenging task because of the subtle inter-class
differences. Existing research applies large-scale convolutional neural
networks or vision transformers as the feature extractor, which is extremely
computationally expensive. In fact, real-world fine-grained recognition
scenarios often require a lightweight mobile network that can be used offline.
However, the feature extraction capability of such mobile networks is
fundamentally weaker than that of large-scale models. In this paper, based on
the lightweight MobileNetV2, we propose a Progressive Multi-Stage Interactive
training method with a Recursive Mosaic Generator (RMG-PMSI). First, we propose
a Recursive Mosaic Generator (RMG) that generates images with different
granularities in different phases. Then, the features of different stages pass
through a Multi-Stage Interaction (MSI) module, which strengthens and
complements the corresponding features of different stages. Finally, using
progressive training (P), the features extracted by the model in different
stages can be fully utilized and fused with each other. Experiments on three
prestigious fine-grained benchmarks show that RMG-PMSI significantly improves
performance with good robustness and transferability.
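
The paper ships no code, so the following is a minimal sketch of the mosaic idea behind an RMG-style generator: split the input into an n x n grid of patches and shuffle them, using a different granularity n in each training phase. The function name, the PyTorch dependency, and the batch-wise shared shuffle are assumptions for illustration; the paper's recursive construction may differ in detail.

```python
import torch


def mosaic_generator(images: torch.Tensor, n: int) -> torch.Tensor:
    """Split each image into an n x n grid of patches and shuffle them.

    images: (B, C, H, W) tensor with H and W divisible by n.
    Returns a tensor of the same shape with the patches permuted.
    """
    b, c, h, w = images.shape
    ph, pw = h // n, w // n
    # (B, C, n, ph, n, pw) -> (B, C, n*n, ph, pw): flatten the patch grid
    patches = images.reshape(b, c, n, ph, n, pw).permute(0, 1, 2, 4, 3, 5)
    patches = patches.reshape(b, c, n * n, ph, pw)
    perm = torch.randperm(n * n)  # one shared shuffle for the whole batch
    patches = patches[:, :, perm]
    # Reassemble the shuffled patches back into full images.
    patches = patches.reshape(b, c, n, n, ph, pw).permute(0, 1, 2, 4, 3, 5)
    return patches.reshape(b, c, h, w)
```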
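The abstract describes the MSI module only as strengthening and complementing features across stages. One plausible reading is cross-stage attention over pooled per-stage descriptors, sketched below; the class name, the use of nn.MultiheadAttention, the residual design, and the assumption that all stage descriptors are already projected to a common dimension are illustrative, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class MultiStageInteraction(nn.Module):
    """Illustrative cross-stage interaction: pooled descriptors from each
    backbone stage attend to one another, and the result is added back as
    a residual so each stage is strengthened by the others."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # dim must be divisible by num_heads
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, stage_feats):
        # stage_feats: list of (B, dim) descriptors, one per stage
        x = torch.stack(stage_feats, dim=1)  # (B, num_stages, dim)
        out, _ = self.attn(x, x, x)          # stages exchange information
        x = self.norm(x + out)               # complement each stage's features
        return [x[:, i] for i in range(x.size(1))]
```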
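Progressive training, as popularized by the PMG jigsaw-patches paper listed under related work below, typically optimizes one stage per step on inputs of the matching granularity, finishing with the intact image. A hedged sketch of such a loop follows; the granularity schedule and the stage-aware model interface are hypothetical, and loader, model, optimizer, and criterion are assumed to be defined.

```python
# Illustrative schedule: finer mosaics for earlier stages, the intact
# image for the final step (the paper's actual values are not given here).
granularities = [8, 4, 2, 1]

for images, labels in loader:
    for stage, n in enumerate(granularities):
        optimizer.zero_grad()
        inputs = mosaic_generator(images, n) if n > 1 else images
        logits = model(inputs, stage=stage)  # hypothetical stage-aware forward
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
```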
Related papers
- Prototype-Driven Multi-Feature Generation for Visible-Infrared Person Re-identification [11.664820595258988]
Primary challenges in visible-infrared person re-identification arise from the differences between visible (VIS) and infrared (IR) images.
Existing methods often rely on horizontal partitioning to align part-level features, which can introduce inaccuracies.
We propose a novel Prototype-Driven Multi-feature generation framework (PDM) aimed at mitigating cross-modal discrepancies.
arXiv Detail & Related papers (2024-09-09T14:12:23Z)
- SEED-X: Multimodal Models with Unified Multi-granularity Comprehension and Generation [61.392147185793476]
We present a unified and versatile foundation model, namely, SEED-X.
SEED-X is able to model multi-granularity visual semantics for comprehension and generation tasks.
We hope that our work will inspire future research into what can be achieved by versatile multimodal foundation models in real-world applications.
arXiv Detail & Related papers (2024-04-22T17:56:09Z)
- Multilinear Operator Networks [60.7432588386185]
Polynomial Networks are a class of models that do not require activation functions.
We propose MONet, which relies solely on multilinear operators.
arXiv Detail & Related papers (2024-01-31T16:52:19Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- Multimodal Fusion Transformer for Remote Sensing Image Classification [35.57881383390397]
Vision transformers (ViTs) have been trending in image classification tasks due to their promising performance when compared to convolutional neural networks (CNNs).
To achieve satisfactory performance close to that of CNNs, transformers need fewer parameters.
We introduce a new multimodal fusion transformer (MFT) network which comprises a multihead cross patch attention (mCrossPA) for HSI land-cover classification.
arXiv Detail & Related papers (2022-03-31T11:18:41Z)
- Multi-scale and Cross-scale Contrastive Learning for Semantic Segmentation [5.281694565226513]
We apply contrastive learning to enhance the discriminative power of the multi-scale features extracted by semantic segmentation networks.
By first mapping the encoder's multi-scale representations to a common feature space, we instantiate a novel form of supervised local-global constraint.
arXiv Detail & Related papers (2022-03-25T01:24:24Z)
- ViTAEv2: Vision Transformer Advanced by Exploring Inductive Bias for Image Recognition and Beyond [76.35955924137986]
We propose a Vision Transformer Advanced by Exploring intrinsic IB from convolutions, i.e., ViTAE.
ViTAE has several spatial pyramid reduction modules to downsample and embed the input image into tokens with rich multi-scale context.
We obtain state-of-the-art classification performance: 88.5% Top-1 accuracy on the ImageNet validation set and the best 91.2% Top-1 accuracy on the ImageNet Real validation set.
arXiv Detail & Related papers (2022-02-21T10:40:05Z)
- (M)SLAe-Net: Multi-Scale Multi-Level Attention embedded Network for Retinal Vessel Segmentation [0.0]
We propose a multi-scale, multi-level attention embedded CNN architecture ((M)SLAe-Net) to address the issue of multi-stage processing.
We do this by extracting features at multiple scales and multiple levels of the network, enabling our model to extract local and global features holistically.
Our unique network design and novel D-DPP module, with an efficient task-specific loss function for thin vessels, enable better cross-dataset performance.
arXiv Detail & Related papers (2021-09-05T14:29:00Z)
- ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias [76.16156833138038]
We propose a novel Vision Transformer Advanced by Exploring intrinsic IB from convolutions, i.e., ViTAE.
ViTAE has several spatial pyramid reduction modules to downsample and embed the input image into tokens with rich multi-scale context.
In each transformer layer, ViTAE has a convolution block in parallel to the multi-head self-attention module, whose features are fused and fed into the feed-forward network.
arXiv Detail & Related papers (2021-06-07T05:31:06Z)
- MGML: Multi-Granularity Multi-Level Feature Ensemble Network for Remote Sensing Scene Classification [15.856162817494726]
We propose a Multi-Granularity Multi-Level Feature Ensemble Network (MGML-FENet) to efficiently tackle the remote sensing (RS) scene classification task.
We show that our proposed networks achieve better performance than previous state-of-the-art (SOTA) networks.
arXiv Detail & Related papers (2020-12-29T02:18:11Z)
- Fine-Grained Visual Classification via Progressive Multi-Granularity Training of Jigsaw Patches [67.51747235117]
Fine-grained visual classification (FGVC) is much more challenging than traditional classification tasks.
Recent works mainly tackle this problem by focusing on how to locate the most discriminative parts.
We propose a novel framework for fine-grained visual classification to tackle these problems.
arXiv Detail & Related papers (2020-03-08T19:27:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.