Continual Local Replacement for Few-shot Learning
- URL: http://arxiv.org/abs/2001.08366v2
- Date: Tue, 10 Mar 2020 13:21:57 GMT
- Title: Continual Local Replacement for Few-shot Learning
- Authors: Canyu Le, Zhonggui Chen, Xihan Wei, Biao Wang, Lei Zhang
- Abstract summary: The goal of few-shot learning is to learn a model that can recognize novel classes from one or a few training examples.
It is challenging mainly due to two aspects: (1) good feature representations of novel classes are lacking; (2) a few labeled samples cannot accurately represent the true data distribution.
A novel continual local replacement strategy is proposed to address this data deficiency problem.
- Score: 13.956960291580938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of few-shot learning is to learn a model that can recognize novel
classes from one or a few training examples. It is challenging mainly due to two
aspects: (1) good feature representations of novel classes are lacking; (2) a few
labeled samples cannot accurately represent the true data distribution, which
makes it hard to learn a good decision function for classification. In this
work, we use a sophisticated network architecture to learn better feature
representations and focus on the second issue. A novel continual local
replacement strategy is proposed to address the data deficiency problem. It
takes advantage of the content of unlabeled images to continually enhance
labeled ones. Specifically, a pseudo-labeling method is adopted to constantly
select semantically similar images on the fly. The original labeled images are
then locally replaced by the selected images for the next training epoch. In this
way, the model can directly learn new semantic information from unlabeled
images, and the capacity of the supervised signals in the embedding space can be
significantly enlarged. This allows the model to improve generalization and
learn a better decision boundary for classification. Our method is conceptually
simple and easy to implement. Extensive experiments demonstrate that it can
achieve state-of-the-art results on various few-shot image recognition
benchmarks.
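
As a concrete illustration of the loop the abstract describes, here is a minimal sketch in PyTorch. The pseudo-label-matching selection rule, the rectangular replacement patch, and names such as `clr_epoch` and `frac` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def local_replace(labeled, donor, frac=0.5):
    """Replace a random rectangular region of `labeled` with the same
    region taken from `donor`; both are (C, H, W) tensors. The
    rectangular-patch rule and `frac` are assumptions."""
    _, H, W = labeled.shape
    h, w = max(1, int(H * frac)), max(1, int(W * frac))
    top = torch.randint(0, H - h + 1, (1,)).item()
    left = torch.randint(0, W - w + 1, (1,)).item()
    mixed = labeled.clone()
    mixed[:, top:top + h, left:left + w] = donor[:, top:top + h, left:left + w]
    return mixed

def clr_epoch(model, optimizer, support_x, support_y, unlabeled_x):
    """One training epoch with continual local replacement (sketch).
    support_x: (N, C, H, W) labeled images; support_y: (N,) labels;
    unlabeled_x: (M, C, H, W) unlabeled pool."""
    model.eval()
    with torch.no_grad():
        # Pseudo-label the unlabeled pool with the current model, on the fly.
        pseudo = model(unlabeled_x).argmax(dim=1)            # (M,)

    batch = []
    for x, y in zip(support_x, support_y):
        # "Semantically similar" here means: shares the pseudo-label.
        candidates = (pseudo == y).nonzero(as_tuple=True)[0]
        if len(candidates) > 0:
            pick = candidates[torch.randint(len(candidates), (1,)).item()]
            x = local_replace(x, unlabeled_x[pick])
        batch.append(x)

    # Train on the locally replaced images, keeping the original labels.
    model.train()
    loss = F.cross_entropy(model(torch.stack(batch)), support_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the pseudo-labels are recomputed with the current model, the selected donor images change from epoch to epoch, which is what makes the replacement continual.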
Related papers
- Few-shot Class-Incremental Semantic Segmentation via Pseudo-Labeling and Knowledge Distillation [3.4436201325139737]
We address the problem of learning new classes for semantic segmentation models from few examples.
For learning from limited data, we propose a pseudo-labeling strategy to augment the few-shot training annotations.
We integrate the above steps into a single convolutional neural network with a unified learning objective.
arXiv Detail & Related papers (2023-08-05T05:05:37Z)
- SATS: Self-Attention Transfer for Continual Semantic Segmentation [50.51525791240729]
Continual semantic segmentation suffers from the same catastrophic forgetting issue as continual classification learning.
This study proposes to transfer a new type of knowledge-relevant information, namely the relationships between elements within each image.
This relationship information can be effectively obtained from the self-attention maps of a Transformer-style segmentation model.
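
As a rough illustration of such a transfer term (not the paper's exact loss), one could penalize drift between the old and new models' attention maps; the MSE penalty and the per-layer map shapes are assumptions.

```python
import torch.nn.functional as F

def sats_distill_loss(old_attn_maps, new_attn_maps):
    """Sketch of self-attention transfer for continual segmentation:
    keep the new model's per-layer attention maps close to the frozen
    old model's, so within-image relationships learned on old classes
    are preserved. Shapes (B, heads, N, N) and MSE are assumptions."""
    loss = 0.0
    for a_old, a_new in zip(old_attn_maps, new_attn_maps):
        loss = loss + F.mse_loss(a_new, a_old.detach())
    return loss / len(old_attn_maps)
```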
arXiv Detail & Related papers (2022-03-15T06:09:28Z)
- Multi-label Iterated Learning for Image Classification with Label Ambiguity [3.5736176624479654]
We propose multi-label iterated learning (MILe) to incorporate the inductive biases of multi-label learning from single labels.
MILe is a simple yet effective procedure that builds a multi-label description of the image by propagating binary predictions.
We show that MILe is effective at reducing label noise, achieving state-of-the-art performance on real-world large-scale noisy data such as WebVision.
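
Based on the summary alone, the core propagation step might be sketched as follows; the teacher/student split, the 0.5 threshold, and the binary cross-entropy objective are assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def mile_iteration(teacher, student, images, single_labels, thresh=0.5):
    """One iterated-learning step (sketch): the teacher's thresholded
    sigmoid outputs become a binary multi-label target for the student,
    with the original single label always kept. `thresh` is assumed."""
    with torch.no_grad():
        probs = torch.sigmoid(teacher(images))               # (B, num_classes)
        multi = (probs > thresh).float()                     # propagate binary predictions
        multi.scatter_(1, single_labels.unsqueeze(1), 1.0)   # keep the given label
    return F.binary_cross_entropy_with_logits(student(images), multi)
```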
arXiv Detail & Related papers (2021-11-23T22:10:00Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
arXiv Detail & Related papers (2021-05-25T12:22:11Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
- An Empirical Study of the Collapsing Problem in Semi-Supervised 2D Human Pose Estimation [80.02124918255059]
Semi-supervised learning aims to boost the accuracy of a model by exploring unlabeled images.
We train two networks to mutually teach each other: the more reliable predictions on easy images from each network are used to teach the other network about the corresponding hard images.
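
The summary gives the mechanism only at a high level, and the paper itself concerns heatmap-based pose estimation; the sketch below illustrates the mutual-teaching idea with a plain classification head for brevity. The threshold `conf` and the function names are assumptions.

```python
import torch
import torch.nn.functional as F

def mutual_teach_step(net_a, net_b, unlabeled, conf=0.9):
    """Sketch of mutual teaching: each network's confident predictions
    on "easy" images become training targets for the other network."""
    with torch.no_grad():
        conf_a, lab_a = F.softmax(net_a(unlabeled), dim=1).max(dim=1)
        conf_b, lab_b = F.softmax(net_b(unlabeled), dim=1).max(dim=1)
        easy_a, easy_b = conf_a > conf, conf_b > conf

    def teach(student, targets, mask):
        # Skip if there are no confident predictions this step.
        if mask.sum() == 0:
            return unlabeled.new_zeros(())
        return F.cross_entropy(student(unlabeled[mask]), targets[mask])

    # A's reliable predictions teach B, and vice versa.
    return teach(net_b, lab_a, easy_a) + teach(net_a, lab_b, easy_b)
```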
arXiv Detail & Related papers (2020-11-25T03:29:52Z)
- Multi-label Zero-shot Classification by Learning to Transfer from External Knowledge [36.04579549557464]
Multi-label zero-shot classification aims to predict multiple unseen class labels for an input image.
This paper introduces a novel multi-label zero-shot classification framework by learning to transfer from external knowledge.
arXiv Detail & Related papers (2020-07-30T17:26:46Z)
- One-Shot Image Classification by Learning to Restore Prototypes [11.448423413463916]
One-shot image classification aims to train image classifiers over the dataset with only one image per category.
For one-shot learning, existing metric learning approaches suffer from poor performance because the single training image may not be representative of the class.
We propose a simple yet effective regression model, denoted RestoreNet, which learns a class transformation on the image feature to move the image closer to the class center in the feature space.
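
Based only on this one-line summary, a minimal sketch of such a class transformation might look like the following; the single linear map, the MSE objective, and names like `RestoreHead` are assumptions rather than the paper's architecture.

```python
import torch.nn as nn
import torch.nn.functional as F

class RestoreHead(nn.Module):
    """Sketch of a RestoreNet-style transformation: regress an image
    embedding toward its class center in feature space."""
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)

    def forward(self, feat):                  # feat: (B, D)
        return self.transform(feat)

def restore_loss(head, feats, labels, class_centers):
    """Train the head on base classes: feats (B, D), labels (B,),
    class_centers (num_classes, D) precomputed from base-class data."""
    target = class_centers[labels]            # each sample's class center
    return F.mse_loss(head(feats), target)
```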
arXiv Detail & Related papers (2020-05-04T02:11:30Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.