Augmented Bi-path Network for Few-shot Learning
- URL: http://arxiv.org/abs/2007.07614v1
- Date: Wed, 15 Jul 2020 11:13:38 GMT
- Title: Augmented Bi-path Network for Few-shot Learning
- Authors: Baoming Yan, Chen Zhou, Bo Zhao, Kan Guo, Jiang Yang, Xiaobo Li, Ming
Zhang, Yizhou Wang
- Abstract summary: We propose Augmented Bi-path Network (ABNet) for learning to compare both global and local features at multiple scales.
Specifically, salient patches are extracted and embedded as the local features for every image. Then, the model learns to augment the features for better robustness.
- Score: 16.353228724916505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot Learning (FSL), which aims to learn from few labeled training
data, is becoming a popular research topic due to the expensive labeling cost in
many real-world applications. One successful kind of FSL method learns to compare
the testing (query) image and the training (support) image by simply concatenating
the features of the two images and feeding them into a neural network. However,
with few labeled data in each class, the neural network has difficulty learning
or comparing the local features of the two images. Such a simple image-level
comparison may cause serious misclassification. To solve this problem, we
propose Augmented Bi-path Network (ABNet) for learning to compare both global
and local features at multiple scales. Specifically, salient patches are
extracted and embedded as the local features for every image. Then, the model
learns to augment the features for better robustness. Finally, the model learns
to compare global and local features separately, i.e., in two paths, before
merging the similarities. Extensive experiments show that the proposed ABNet
outperforms the state-of-the-art methods. Both quantitative and visual ablation
studies are provided to verify that the proposed modules lead to more precise
comparison results.
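The two-path comparison described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation: the feature extractor, the "salient patch" selector (fixed crops here), and the merging weight `alpha` are all simplified stand-ins chosen only to show the shape of the pipeline — compare global features and local patch features separately, then merge the similarities.

```python
import numpy as np

# Toy sketch of a bi-path global/local comparison. All names and the
# patch-selection strategy are illustrative, not from the ABNet paper.

rng = np.random.default_rng(0)

def embed(image):
    # Stand-in feature extractor: mean-pool the image to a channel vector.
    return image.mean(axis=(0, 1))

def extract_patches(image, k=3, size=8):
    # Stand-in "salient patch" extractor: k fixed crops along the diagonal,
    # each embedded as a local feature vector.
    h, w, _ = image.shape
    patches = []
    for i in range(k):
        y = (i * (h - size)) // max(k - 1, 1)
        x = (i * (w - size)) // max(k - 1, 1)
        patches.append(embed(image[y:y + size, x:x + size]))
    return patches

def similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def bi_path_score(query, support, alpha=0.5):
    # Path 1: compare global features of the two images.
    global_sim = similarity(embed(query), embed(support))
    # Path 2: compare local patch features across the two images.
    local_sims = [similarity(p, q)
                  for p in extract_patches(query)
                  for q in extract_patches(support)]
    local_sim = max(local_sims)  # best-matching patch pair
    # Merge the similarities from the two paths.
    return alpha * global_sim + (1 - alpha) * local_sim

query = rng.random((32, 32, 3))
support = rng.random((32, 32, 3))
print(round(bi_path_score(query, support), 3))
```

In the actual method the features are learned and augmented by the network rather than fixed, and the per-class merged similarity would drive the query's classification.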
Related papers
- Unified Contrastive Learning in Image-Text-Label Space [130.31947133453406]
Unified Contrastive Learning (UniCL) is an effective way of learning semantically rich yet discriminative representations.
UniCL on its own is a good learner on pure image-label data, rivaling supervised learning methods across three image classification datasets.
arXiv Detail & Related papers (2022-04-07T17:34:51Z)
- Compare learning: bi-attention network for few-shot learning [6.559037166322981]
One family of few-shot learning methods, metric learning, addresses this challenge by first learning a deep distance metric to determine whether a pair of images belongs to the same category.
In this paper, we propose a novel approach named Bi-attention network to compare the instances, which can measure the similarity between embeddings of instances precisely, globally and efficiently.
arXiv Detail & Related papers (2022-03-25T07:39:10Z)
- Learning Contrastive Representation for Semantic Correspondence [150.29135856909477]
We propose a multi-level contrastive learning approach for semantic matching.
We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects.
arXiv Detail & Related papers (2021-09-22T18:34:14Z)
- Remote Sensing Images Semantic Segmentation with General Remote Sensing Vision Model via a Self-Supervised Contrastive Learning Method [13.479068312825781]
We propose Global style and Local matching Contrastive Learning Network (GLCNet) for remote sensing semantic segmentation.
Specifically, the global style contrastive module is used to learn a better image-level representation.
The local feature matching contrastive module is designed to learn representations of local regions, which is beneficial for semantic segmentation.
arXiv Detail & Related papers (2021-06-20T03:03:40Z)
- Learning to Focus: Cascaded Feature Matching Network for Few-shot Image Recognition [38.49419948988415]
Deep networks can learn to accurately recognize objects of a category by training on a large number of images.
The low-shot image recognition task, a meta-learning challenge, arises when only a few annotated images are available for learning a recognition model for a category.
Our method, called Cascaded Feature Matching Network (CFMN), is proposed to solve this problem.
Experiments for few-shot learning on two standard datasets, miniImageNet and Omniglot, have confirmed the effectiveness of our method.
arXiv Detail & Related papers (2021-01-13T11:37:28Z)
- BSNet: Bi-Similarity Network for Few-shot Fine-grained Image Classification [35.50808687239441]
We propose a so-called Bi-Similarity Network (BSNet).
The bi-similarity module learns feature maps according to two similarity measures of diverse characteristics.
In this way, the model is enabled to learn more discriminative and less similarity-biased features from few shots of fine-grained images.
arXiv Detail & Related papers (2020-11-29T08:38:17Z)
- Remote Sensing Image Scene Classification with Self-Supervised Paradigm under Limited Labeled Samples [11.025191332244919]
We introduce a new self-supervised learning (SSL) mechanism to obtain a high-performance pre-training model for RSI scene classification from large unlabeled data.
Experiments on three commonly used RSI scene classification datasets demonstrate that this new learning paradigm outperforms the traditional dominant ImageNet pre-trained model.
The insights distilled from our studies can help to foster the development of SSL in the remote sensing community.
arXiv Detail & Related papers (2020-10-02T09:27:19Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to train models for new classes effectively with only a limited number of labeled examples.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Inter-Image Communication for Weakly Supervised Localization [77.2171924626778]
Weakly supervised localization aims at finding target object regions using only image-level supervision.
We propose to leverage pixel-level similarities across different objects for learning more accurate object locations.
Our method achieves the Top-1 localization error rate of 45.17% on the ILSVRC validation set.
arXiv Detail & Related papers (2020-08-12T04:14:11Z)
- ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image Classification [49.87503122462432]
We introduce a novel neural network termed Relation-and-Margin learning Network (ReMarNet).
Our method assembles two networks with different backbones so as to learn features that perform excellently under both of the aforementioned classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
arXiv Detail & Related papers (2020-06-27T13:50:20Z)
- Learning to Compare Relation: Semantic Alignment for Few-Shot Learning [48.463122399494175]
We present a novel semantic alignment model to compare relations, which is robust to content misalignment.
We conduct extensive experiments on several few-shot learning datasets.
arXiv Detail & Related papers (2020-02-29T08:37:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.