Weak Novel Categories without Tears: A Survey on Weak-Shot Learning
- URL: http://arxiv.org/abs/2110.02651v1
- Date: Wed, 6 Oct 2021 11:04:36 GMT
- Title: Weak Novel Categories without Tears: A Survey on Weak-Shot Learning
- Authors: Li Niu
- Abstract summary: It is time-consuming and labor-intensive to collect abundant fully-annotated training data for all categories.
Weak-shot learning can also be treated as weakly supervised learning with auxiliary fully supervised categories.
- Score: 10.668094663201385
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Deep learning is a data-hungry approach, which requires massive training
data. However, it is time-consuming and labor-intensive to collect abundant
fully-annotated training data for all categories. Assuming the existence of
base categories with adequate fully-annotated training samples, different
paradigms requiring fewer training samples or weaker annotations for novel
categories have attracted growing research interest. Among them, zero-shot
(resp., few-shot) learning explores using zero (resp., a few) training samples
for novel categories, which lowers the quantity requirement for novel
categories. Instead, weak-shot learning lowers the quality requirement for
novel categories. Specifically, sufficient training samples are collected for
novel categories but they only have weak annotations. In different tasks, weak
annotations are presented in different forms (e.g., noisy labels for image
classification, image labels for object detection, bounding boxes for
segmentation), similar to the definitions in weakly supervised learning.
Therefore, weak-shot learning can also be treated as weakly supervised learning
with auxiliary fully supervised categories. In this paper, we discuss the
existing weak-shot learning methodologies in different tasks and summarize the
codes at https://github.com/bcmi/Awesome-Weak-Shot-Learning.
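To make the weak-shot setting concrete, the following is a minimal, hypothetical sketch (category names, image IDs, and the noisy-label pattern are all illustrative, not from the survey) of how a weak-shot classification dataset pairs fully-annotated base categories with abundantly but noisily labeled novel categories:

```python
# Hypothetical illustration of the weak-shot classification setting:
# base categories come with clean (fully supervised) labels, while
# novel categories have abundant but noisy (e.g., web-crawled) labels.

base_set = [
    # (image_id, clean_label) -- base categories, full supervision
    ("img_001", "cat"),
    ("img_002", "dog"),
    ("img_003", "cat"),
]

novel_set = [
    # (image_id, noisy_label) -- novel categories, weak supervision;
    # some labels are wrong, mimicking web-crawled training data
    ("img_101", "zebra"),   # correct
    ("img_102", "zebra"),   # actually a horse: label noise
    ("img_103", "okapi"),   # correct
]

def category_split(base, novel):
    """Return the sets of base and novel category names."""
    return {y for _, y in base}, {y for _, y in novel}

base_cats, novel_cats = category_split(base_set, novel_set)
print(base_cats & novel_cats)  # disjoint by construction -> set()
```

The defining assumption is that the two category sets are disjoint: supervision quality differs across categories, not within them.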
Related papers
- Liberating Seen Classes: Boosting Few-Shot and Zero-Shot Text Classification via Anchor Generation and Classification Reframing [38.84431954053434]
Few-shot and zero-shot text classification aim to recognize samples from novel classes with limited labeled samples or no labeled samples at all.
We propose a simple and effective strategy for few-shot and zero-shot text classification.
arXiv Detail & Related papers (2024-05-06T15:38:32Z) - A Simple Approach to Adversarial Robustness in Few-shot Image Classification [20.889464448762176]
We show that a simple transfer-learning based approach can be used to train adversarially robust few-shot classifiers.
We also present a method for the novel classification task based on calibrating the centroid of the few-shot category towards the base classes.
arXiv Detail & Related papers (2022-04-11T22:46:41Z) - Revisiting Deep Local Descriptor for Improved Few-Shot Classification [56.74552164206737]
We show how one can improve the quality of embeddings by leveraging Dense Classification and Attentive Pooling.
We suggest to pool feature maps by applying attentive pooling instead of the widely used global average pooling (GAP) to prepare embeddings for few-shot classification.
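The paper's exact attention formulation is not reproduced here; as a rough sketch, generic attentive pooling replaces the uniform average of GAP with a softmax-weighted sum of local descriptors, where the query vector `q` below is a hypothetical stand-in for whatever scoring mechanism is learned:

```python
import numpy as np

def gap(feature_map):
    """Global average pooling: mean over spatial positions, (H, W, C) -> (C,)."""
    return feature_map.mean(axis=(0, 1))

def attentive_pool(feature_map, q):
    """Generic attentive pooling: softmax-weighted sum of local descriptors.

    feature_map: (H, W, C) grid of local descriptors; q: (C,) a hypothetical
    learned query vector that scores the relevance of each spatial position.
    """
    h, w, c = feature_map.shape
    desc = feature_map.reshape(h * w, c)             # flatten spatial grid
    scores = desc @ q                                # relevance per position
    scores -= scores.max()                           # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over positions
    return weights @ desc                            # weighted combination

rng = np.random.default_rng(0)
fmap = rng.standard_normal((7, 7, 64))
q = rng.standard_normal(64)
print(gap(fmap).shape, attentive_pool(fmap, q).shape)  # (64,) (64,)
```

Note that with a zero query vector the softmax weights are uniform and attentive pooling reduces exactly to GAP, which is why it can be seen as a strict generalization.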
arXiv Detail & Related papers (2021-03-30T00:48:28Z) - Few Shot Learning With No Labels [28.91314299138311]
Few-shot learners aim to recognize new categories given only a small number of training samples.
The core challenge is to avoid overfitting to the limited data while ensuring good generalization to novel classes.
Existing literature makes use of vast amounts of annotated data by simply shifting the label requirement from novel classes to base classes.
arXiv Detail & Related papers (2020-12-26T14:40:12Z) - Closing the Generalization Gap in One-Shot Object Detection [92.82028853413516]
We show that the key to strong few-shot detection models may not lie in sophisticated metric learning approaches, but instead in scaling the number of categories.
Future data annotation efforts should therefore focus on wider datasets and annotate a larger number of categories.
arXiv Detail & Related papers (2020-11-09T09:31:17Z) - One-bit Supervision for Image Classification [121.87598671087494]
One-bit supervision is a novel setting of learning from incomplete annotations.
We propose a multi-stage training paradigm which incorporates negative label suppression into an off-the-shelf semi-supervised learning algorithm.
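A toy sketch of the one-bit supervision setting described above (the function names and class list are illustrative, not from the paper): the learner guesses a class and receives a single yes/no bit, and even a "no" answer carries signal, since that class can be suppressed from the candidate set:

```python
# Hypothetical sketch of one-bit supervision: instead of a full label,
# the learner guesses a class and the annotator returns one bit
# (correct / incorrect). Negative label suppression exploits the "no"
# answers by removing the rejected class from the candidate set.

def one_bit_query(true_label, guessed_label):
    """Annotator's answer: one bit telling whether the guess is correct."""
    return guessed_label == true_label

def update_candidates(candidates, guess, answer):
    """Keep only the guess on 'yes'; suppress the guess on 'no'."""
    if answer:
        return {guess}
    return candidates - {guess}   # negative label suppression

classes = {"cat", "dog", "bird", "fish"}
true_label = "bird"

candidates = set(classes)
for guess in ("cat", "dog", "bird"):
    answer = one_bit_query(true_label, guess)
    candidates = update_candidates(candidates, guess, answer)

print(candidates)  # -> {'bird'}
```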
arXiv Detail & Related papers (2020-09-14T03:06:23Z) - Few-Shot Learning with Intra-Class Knowledge Transfer [100.87659529592223]
We consider the few-shot classification task with an unbalanced dataset.
Recent works have proposed to solve this task by augmenting the training data of the few-shot classes using generative models.
We propose to leverage the intra-class knowledge from the neighbor many-shot classes with the intuition that neighbor classes share similar statistical information.
arXiv Detail & Related papers (2020-08-22T18:15:38Z) - Few-shot Classification via Adaptive Attention [93.06105498633492]
We propose a novel few-shot learning method via optimizing and fast adapting the query sample representation based on very few reference samples.
As demonstrated experimentally, the proposed model achieves state-of-the-art classification results on various benchmark few-shot classification and fine-grained recognition datasets.
arXiv Detail & Related papers (2020-06-12T22:45:47Z) - UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z) - Sharing Matters for Generalization in Deep Metric Learning [22.243744691711452]
This work investigates how to learn characteristics that separate classes without the need for additional annotations or training data.
By formulating our approach as a novel triplet sampling strategy, it can be easily applied on top of recent ranking loss frameworks.
arXiv Detail & Related papers (2020-04-12T10:21:15Z) - Few-Shot Learning with Geometric Constraints [25.22980274856574]
We consider the problem of few-shot learning for classification.
We propose two geometric constraints to fine-tune the network with a few training examples.
arXiv Detail & Related papers (2020-03-20T08:50:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences.