Learn from Anywhere: Rethinking Generalized Zero-Shot Learning with
Limited Supervision
- URL: http://arxiv.org/abs/2107.04952v2
- Date: Wed, 14 Jul 2021 01:28:32 GMT
- Title: Learn from Anywhere: Rethinking Generalized Zero-Shot Learning with
Limited Supervision
- Authors: Gaurav Bhatt, Shivam Chandhok and Vineeth N Balasubramanian
- Abstract summary: We present a practical setting of inductive zero- and few-shot learning, where unlabeled images from other out-of-data classes can be used to improve generalization.
We leverage a formulation based on product-of-experts and introduce a new AUD module that enables us to use unlabeled samples from out-of-data classes.
- Score: 16.12500804569801
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A common problem with most zero- and few-shot learning approaches
is that they suffer from bias towards seen classes, resulting in sub-optimal
performance. Existing efforts utilize unlabeled images from unseen classes
(i.e., transductive zero-shot learning) during training to enable generalization. However, this
limits their use in practical scenarios where data from target unseen classes
is unavailable or infeasible to collect. In this work, we present a practical
setting of inductive zero- and few-shot learning, where unlabeled images from
other out-of-data classes, which belong to neither the seen nor the unseen
categories, can be used to improve generalization in any-shot learning. We leverage a
formulation based on product-of-experts and introduce a new AUD module that
enables us to use unlabeled samples from out-of-data classes which are usually
easily available and entail practically no annotation cost. In addition, we
demonstrate the applicability of our model to a more practical and
challenging setting, Generalized Zero-Shot Learning under limited supervision,
where even the base seen classes lack sufficient annotated samples.
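The abstract does not spell out its product-of-experts formulation, but in the standard form of that technique the experts' per-class distributions are multiplied and renormalized. A minimal NumPy sketch of that combination step (the experts here are hypothetical stand-ins; the paper's actual experts and AUD module are not reproduced):

```python
import numpy as np

def product_of_experts(expert_log_probs):
    """Combine per-expert class log-probabilities via a product of experts.

    Multiplying the expert distributions p_i(y | x) corresponds to summing
    their log-probabilities, followed by renormalization over classes.
    """
    combined = expert_log_probs.sum(axis=0)   # log prod_i p_i(y | x)
    combined -= combined.max()                # numerical stability
    probs = np.exp(combined)
    return probs / probs.sum()

# Hypothetical example: a visual expert and a semantic-attribute expert
# scoring the same four candidate classes.
visual = np.log([0.5, 0.2, 0.2, 0.1])
semantic = np.log([0.4, 0.4, 0.1, 0.1])
print(product_of_experts(np.stack([visual, semantic])))
```

Classes that every expert finds plausible dominate the product, which is why product-of-experts combinations tend to be sharper than simple averaging.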
Related papers
- Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation [14.225723195634941]
We propose a novel approach to prompt learning based on unsupervised knowledge distillation from more powerful models.
Our approach, which we call Knowledge Distillation Prompt Learning (KDPL), can be integrated into existing prompt learning techniques.
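The summary does not give KDPL's exact objective; the sketch below is the standard temperature-scaled distillation loss that unsupervised distillation approaches of this kind typically build on (PyTorch; all names are illustrative):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions: the standard unsupervised distillation objective,
    requiring no ground-truth labels."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * t * t

# Hypothetical usage: distill a stronger frozen model into a prompt learner.
student = torch.randn(4, 10)  # logits from the prompt-tuned student
teacher = torch.randn(4, 10)  # logits from the more powerful teacher
loss = distillation_loss(student, teacher)
```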
arXiv Detail & Related papers (2024-07-03T12:24:40Z) - Liberating Seen Classes: Boosting Few-Shot and Zero-Shot Text Classification via Anchor Generation and Classification Reframing [38.84431954053434]
Few-shot and zero-shot text classification aim to recognize samples from novel classes with limited labeled samples or no labeled samples at all.
We propose a simple and effective strategy for few-shot and zero-shot text classification.
arXiv Detail & Related papers (2024-05-06T15:38:32Z) - Virtual Category Learning: A Semi-Supervised Learning Method for Dense
Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our findings highlight the potential of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z) - Understanding prompt engineering may not require rethinking
generalization [56.38207873589642]
We show that the discrete nature of prompts, combined with a PAC-Bayes prior given by a language model, results in generalization bounds that are remarkably tight by the standards of the literature.
This work provides a possible justification for the widespread practice of prompt engineering.
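The summary does not state the precise bound; as a rough sketch, a McAllester-style PAC-Bayes bound with a language-model prior P over discrete prompts takes the following generic form (the paper's exact variant and constants may differ):

```latex
% Generic McAllester-style PAC-Bayes bound. Q is the posterior over
% prompts, P the prior given by a language model, m the sample count,
% and the bound holds with probability at least 1 - delta.
\mathbb{E}_{h \sim Q}[L(h)] \le \mathbb{E}_{h \sim Q}[\hat{L}(h)]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{m}/\delta)}{2m}}
```

For a point-mass posterior on a single discrete prompt h*, KL(Q || P) reduces to -ln P(h*), so prompts that the language model considers likely yield tight bounds.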
arXiv Detail & Related papers (2023-10-06T00:52:48Z) - Evaluating Zero-cost Active Learning for Object Detection [4.106771265655055]
Object detection requires substantial labeling effort for learning robust models.
Active learning can reduce this effort by intelligently selecting relevant examples to be annotated.
We show that a key ingredient is not only the score at the bounding-box level but also the technique used to aggregate those scores when ranking images.
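The paper's specific scoring functions are not given here; below is a minimal sketch of the aggregation step itself, where swapping the aggregator changes the image ranking (NumPy, illustrative names):

```python
import numpy as np

def rank_images(per_box_scores, aggregate="mean"):
    """Rank images for annotation from per-bounding-box uncertainty scores.

    per_box_scores: list of 1-D arrays, one array of box-level scores per
    image. The aggregation choice (mean, max, sum) changes the ranking and,
    per the paper's finding, matters as much as the box-level score itself.
    """
    agg = {"mean": np.mean, "max": np.max, "sum": np.sum}[aggregate]
    image_scores = [agg(s) if len(s) else 0.0 for s in per_box_scores]
    # Highest aggregated uncertainty first.
    return np.argsort(image_scores)[::-1]
```

Mean aggregation favors images whose boxes are uniformly uncertain, while max favors images containing a single highly uncertain box.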
arXiv Detail & Related papers (2022-12-08T11:48:39Z) - Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering in the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representations.
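A minimal sketch of that recipe, assuming k-means clusters as the pseudo-attributes and inverse-frequency sample weights; the paper's actual reweighting formula is not reproduced here (scikit-learn, illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_attribute_weights(features, n_clusters=8):
    """Cluster feature embeddings into pseudo-attributes and upweight
    samples from small (likely bias-conflicting) clusters."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    counts = np.bincount(labels, minlength=n_clusters)
    weights = 1.0 / counts[labels]    # rarer cluster -> larger weight
    return weights / weights.mean()   # normalize to mean 1
```

The returned per-sample weights would then scale the training loss so that minority clusters are not dominated by the majority group.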
arXiv Detail & Related papers (2021-08-06T05:20:46Z) - FREE: Feature Refinement for Generalized Zero-Shot Learning [86.41074134041394]
Generalized zero-shot learning (GZSL) has achieved significant progress, with many efforts dedicated to overcoming the problems of visual-semantic domain gap and seen-unseen bias.
Most existing methods directly use feature extraction models trained on ImageNet alone, ignoring the cross-dataset bias between ImageNet and GZSL benchmarks.
We propose a simple yet effective GZSL method, termed feature refinement for generalized zero-shot learning (FREE) to tackle the above problem.
arXiv Detail & Related papers (2021-07-29T08:11:01Z) - CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action
Recognition [52.66360172784038]
We propose a clustering-based model, which considers all training samples at once, instead of optimizing for each instance individually.
We call the proposed method CLASTER and observe that it consistently improves over the state-of-the-art in all standard datasets.
arXiv Detail & Related papers (2021-01-18T12:46:24Z) - UniT: Unified Knowledge Transfer for Any-shot Object Detection and
Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.