Few Shot Learning With No Labels
- URL: http://arxiv.org/abs/2012.13751v1
- Date: Sat, 26 Dec 2020 14:40:12 GMT
- Title: Few Shot Learning With No Labels
- Authors: Aditya Bharti, N.B. Vineeth, C.V. Jawahar
- Abstract summary: Few-shot learners aim to recognize new categories given only a small number of training samples.
The core challenge is to avoid overfitting to the limited data while ensuring good generalization to novel classes.
Existing literature makes use of vast amounts of annotated data by simply shifting the label requirement from novel classes to base classes.
- Score: 28.91314299138311
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Few-shot learners aim to recognize new categories given only a small number
of training samples. The core challenge is to avoid overfitting to the limited
data while ensuring good generalization to novel classes. Existing literature
makes use of vast amounts of annotated data by simply shifting the label
requirement from novel classes to base classes. Since data annotation is
time-consuming and costly, reducing the label requirement even further is an
important goal. To that end, our paper presents a more challenging few-shot
setting where no label access is allowed during training or testing. By
leveraging self-supervision for learning image representations and image
similarity for classification at test time, we achieve competitive baselines
while using zero labels, which is strictly fewer labels than any
state-of-the-art method requires. We hope that this work is a step towards
developing few-shot
learning methods which do not depend on annotated data at all. Our code will be
publicly released.
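To make the recipe in the abstract concrete, here is a minimal sketch of label-free few-shot classification: features come from a self-supervised encoder, and each query image is assigned to its most similar support image by cosine similarity. Everything below is illustrative rather than the authors' released code: `encoder` stands in for a pretrained self-supervised network (e.g., a SimCLR-style model), and nearest-support matching is one simple instance of similarity-based classification, not necessarily the paper's exact protocol.

```python
import numpy as np

def embed(images, encoder):
    """Run an encoder over a batch and L2-normalize the features.
    `encoder` stands in for a self-supervised network; here it is any
    callable mapping an (N, D_in) array to (N, D_out) features."""
    feats = encoder(images)
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

def similarity_classify(support_feats, query_feats):
    """Assign each query to its most similar support image by cosine
    similarity -- no class labels are consulted at any point."""
    sims = query_feats @ support_feats.T      # (n_query, n_support)
    return sims.argmax(axis=1)                # nearest support index

# Toy usage with random data and a random linear "encoder" standing
# in for a trained self-supervised model.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 128))
encoder = lambda x: x @ W
support = embed(rng.standard_normal((5, 512)), encoder)   # 5-way, 1-shot
queries = embed(rng.standard_normal((15, 512)), encoder)
print(similarity_classify(support, queries))
```

Since classification reduces to an argmax over similarities, no label is consulted during training or inference; ground-truth labels would only be needed afterwards to score the predictions.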
Related papers
- Active Generalized Category Discovery [60.69060965936214]
Generalized Category Discovery (GCD) endeavors to cluster unlabeled samples from both novel and old classes.
We take the spirit of active learning and propose a new setting called Active Generalized Category Discovery (AGCD).
Our method achieves state-of-the-art performance on both generic and fine-grained datasets.
arXiv Detail & Related papers (2024-03-07T07:12:24Z)
- Few-shot Class-Incremental Semantic Segmentation via Pseudo-Labeling and Knowledge Distillation [3.4436201325139737]
We address the problem of learning new classes for semantic segmentation models from few examples.
For learning from limited data, we propose a pseudo-labeling strategy to augment the few-shot training annotations.
We integrate the above steps into a single convolutional neural network with a unified learning objective.
arXiv Detail & Related papers (2023-08-05T05:05:37Z)
- Robust Assignment of Labels for Active Learning with Sparse and Noisy Annotations [0.17188280334580192]
Supervised classification algorithms are used to solve a growing number of real-life problems around the globe.
Unfortunately, acquiring good-quality annotations for many tasks is infeasible or too expensive to be done in practice.
We propose two novel annotation unification algorithms that utilize unlabeled parts of the sample space.
arXiv Detail & Related papers (2023-07-25T19:40:41Z)
- Weakly Supervised Semantic Segmentation using Out-of-Distribution Data [50.45689349004041]
Weakly supervised semantic segmentation (WSSS) methods are often built on pixel-level localization maps.
We propose a novel source of information to distinguish foreground from the background: Out-of-Distribution (OoD) data.
arXiv Detail & Related papers (2022-03-08T05:33:35Z)
- Weak Novel Categories without Tears: A Survey on Weak-Shot Learning [10.668094663201385]
It is time-consuming and labor-intensive to collect abundant fully-annotated training data for all categories.
Weak-shot learning can also be treated as weakly supervised learning with auxiliary fully supervised categories.
arXiv Detail & Related papers (2021-10-06T11:04:36Z)
- Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets [90.61266099147053]
We investigate efficient annotation strategies for collecting multi-class classification labels for a large collection of images.
We propose modifications and best practices aimed at minimizing human labeling effort.
Simulated experiments on a 125k-image subset of ImageNet100 show that it can be annotated to 80% top-1 accuracy with 0.35 annotations per image on average.
arXiv Detail & Related papers (2021-04-26T16:29:32Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer that filters out noisy pseudo-labels by taking the intersection of the pseudo-labels generated from different augmentations of the same image (see the first sketch after this list).
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- All Labels Are Not Created Equal: Enhancing Semi-supervision via Label Grouping and Co-training [32.45488147013166]
Pseudo-labeling is a key component in semi-supervised learning (SSL).
We propose SemCo, a method which leverages label semantics and co-training to address this problem.
We show that our method achieves state-of-the-art performance across various SSL tasks, including a 5.6% accuracy improvement on the Mini-ImageNet dataset with 1000 labeled examples.
arXiv Detail & Related papers (2021-04-12T07:33:16Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
- One-bit Supervision for Image Classification [121.87598671087494]
One-bit supervision is a novel setting for learning from incomplete annotations.
We propose a multi-stage training paradigm that incorporates negative label suppression into an off-the-shelf semi-supervised learning algorithm (see the second sketch after this list).
arXiv Detail & Related papers (2020-09-14T03:06:23Z)
- Multi-label Zero-shot Classification by Learning to Transfer from External Knowledge [36.04579549557464]
Multi-label zero-shot classification aims to predict multiple unseen class labels for an input image.
This paper introduces a novel multi-label zero-shot classification framework by learning to transfer from external knowledge.
arXiv Detail & Related papers (2020-07-30T17:26:46Z)
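Two of the techniques summarized above are concrete enough to sketch. First, the pseudo-label intersection from "A Closer Look at Self-training for Zero-Label Semantic Segmentation": a pixel keeps its pseudo-label only if every augmented view of the image agrees on it. This is a minimal numpy sketch under stated assumptions (probability maps for all views are already warped back to a common frame; the function name and confidence threshold are illustrative, not taken from the paper):

```python
import numpy as np

IGNORE = 255  # conventional "ignore" index in segmentation losses

def intersect_pseudo_labels(prob_maps, threshold=0.9):
    """prob_maps: (V, C, H, W) class probabilities for V augmented
    views of the same image, aligned to a common frame. A pixel keeps
    its pseudo-label only if every view predicts the same class (the
    intersection) with sufficient confidence; all other pixels are set
    to IGNORE so they do not contribute to the self-training loss."""
    labels = prob_maps.argmax(axis=1)            # (V, H, W) hard labels
    conf = prob_maps.max(axis=1)                 # (V, H, W) confidences
    agree = (labels == labels[0]).all(axis=0)    # all views agree per pixel
    confident = (conf >= threshold).all(axis=0)  # all views confident
    return np.where(agree & confident, labels[0], IGNORE)

# Toy usage: 2 views, 3 classes, 4x4 image.
rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(3), size=(2, 4, 4)).transpose(0, 3, 1, 2)
print(intersect_pseudo_labels(probs, threshold=0.4))
```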
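Second, a hedged sketch of what the negative label suppression in "One-bit Supervision for Image Classification" might look like for a single sample: the learner guesses its top class, the annotator answers yes or no (one bit), and rejected classes are zeroed out of the predictive distribution before the sample is pseudo-labeled. The function and round structure here are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def one_bit_round(probs, true_label, banned):
    """One round of one-bit supervision for a single sample.
    probs: (C,) model class probabilities; banned: classes the
    annotator has already rejected for this sample. Returns the
    confirmed label (or None) and the updated banned set."""
    p = probs.copy()
    p[list(banned)] = 0.0                # suppress rejected classes
    guess = int(p.argmax())              # learner's one-bit query
    if guess == true_label:              # annotator answers "yes"
        return guess, banned             # sample gains a full label
    banned = banned | {guess}            # annotator answers "no"
    return None, banned                  # label stays unknown, guess banned

probs = np.array([0.5, 0.3, 0.15, 0.05])
label, banned = one_bit_round(probs, true_label=1, banned=set())
print(label, banned)  # None, {0}: class 0 is suppressed for this sample
```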
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.