Learning Clusterable Visual Features for Zero-Shot Recognition
- URL: http://arxiv.org/abs/2010.03245v2
- Date: Wed, 14 Oct 2020 17:34:34 GMT
- Title: Learning Clusterable Visual Features for Zero-Shot Recognition
- Authors: Jingyi Xu and Zhixin Shu and Dimitris Samaras
- Abstract summary: In zero-shot learning (ZSL), conditional generators have been widely used to generate additional training features.
In this paper, we propose to learn clusterable features for ZSL problems.
Experiments on SUN, CUB, and AWA2 datasets show consistent improvement over previous state-of-the-art ZSL results.
- Score: 38.8104394191698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In zero-shot learning (ZSL), conditional generators have been widely used to
generate additional training features. These features can then be used to train
the classifiers for testing data. However, some testing data are considered
"hard" as they lie close to the decision boundaries and are prone to
misclassification, leading to performance degradation for ZSL. In this paper,
we propose to learn clusterable features for ZSL problems. Using a Conditional
Variational Autoencoder (CVAE) as the feature generator, we project the
original features to a new feature space supervised by an auxiliary
classification loss. To further increase clusterability, we fine-tune the
features using Gaussian similarity loss. The clusterable visual features are
not only more suitable for CVAE reconstruction but are also more separable
which improves classification accuracy. Moreover, we introduce Gaussian noise
to enlarge the intra-class variance of the generated features, which helps to
improve the classifier's robustness. Our experiments on SUN, CUB, and AWA2
datasets show consistent improvement over previous state-of-the-art ZSL results
by a large margin. In addition to its effectiveness on zero-shot
classification, experiments show that our method to increase feature
clusterability benefits few-shot learning algorithms as well.
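To make the pipeline above concrete, the following is a minimal, hedged sketch of the main ingredients the abstract describes: an auxiliary classification loss on projected features, a Gaussian similarity loss that encourages same-class features to cluster, and Gaussian noise added to generated features to enlarge intra-class variance. The exact loss form, network sizes, and hyper-parameters below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, proj_dim, num_classes = 2048, 512, 50
projector = nn.Linear(feat_dim, proj_dim)           # projects original visual features
aux_classifier = nn.Linear(proj_dim, num_classes)   # auxiliary classification head

def gaussian_similarity_loss(z, labels, sigma=1.0):
    """Pull same-class features together under a Gaussian (RBF) similarity (assumed form)."""
    dists = torch.cdist(z, z)                                    # pairwise Euclidean distances
    sim = torch.exp(-dists.pow(2) / (2 * sigma ** 2))            # Gaussian-kernel similarities
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()  # same-class indicator
    # push same-class similarities toward 1 and different-class similarities toward 0
    return ((1 - sim) * same + sim * (1 - same)).mean()

def clusterable_feature_loss(x, labels, lam=0.1):
    z = projector(x)                                  # project to the new feature space
    cls_loss = F.cross_entropy(aux_classifier(z), labels)
    return cls_loss + lam * gaussian_similarity_loss(z, labels)

def perturb_generated(feats, noise_std=0.1):
    """Add Gaussian noise to generated features to enlarge intra-class variance."""
    return feats + noise_std * torch.randn_like(feats)

# toy usage with random stand-ins for extracted visual features
x = torch.randn(32, feat_dim)
y = torch.randint(0, num_classes, (32,))
clusterable_feature_loss(x, y).backward()
```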
Related papers
- A Feature Generator for Few-Shot Learning [2.4500728886415137]
Few-shot learning aims to enable models to recognize novel objects or classes with limited labelled data.
We introduce a feature generator that creates visual features from class-level textual descriptions.
Our results show a significant improvement in accuracy over baseline methods.
arXiv Detail & Related papers (2024-09-21T13:31:12Z) - FeCAM: Exploiting the Heterogeneity of Class Distributions in
Exemplar-Free Continual Learning [21.088762527081883]
Exemplar-free class-incremental learning (CIL) poses several challenges since it prohibits the rehearsal of data from previous tasks.
Recent approaches to incrementally learning the classifier by freezing the feature extractor after the first task have gained much attention.
We explore prototypical networks for CIL, which generate new class prototypes using the frozen feature extractor and classify the features based on the Euclidean distance to the prototypes.
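The prototype-based classification described above is simple enough to sketch; the snippet below is an illustrative assumption (plain class-mean prototypes and Euclidean nearest-prototype assignment), not the FeCAM method itself.

```python
import torch

def build_prototypes(features, labels, num_classes):
    # class prototype = mean feature of each class (features come from a frozen extractor)
    return torch.stack([features[labels == c].mean(dim=0) for c in range(num_classes)])

def nearest_prototype(query, prototypes):
    # assign each query feature to the class with the closest prototype (Euclidean distance)
    return torch.cdist(query, prototypes).argmin(dim=1)

# toy usage
feats = torch.randn(100, 512)
labels = torch.randint(0, 10, (100,))
protos = build_prototypes(feats, labels, num_classes=10)
preds = nearest_prototype(torch.randn(5, 512), protos)
```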
arXiv Detail & Related papers (2023-09-25T11:54:33Z) - Zero-Shot Logit Adjustment [89.68803484284408]
Generalized Zero-Shot Learning (GZSL) is a semantic-descriptor-based learning technique.
Existing generation-based techniques focus on enhancing the generator's effect while neglecting the improvement of the classifier; this paper proposes a new technique that instead improves the classifier.
Our experiments demonstrate that the proposed technique achieves state-of-the-art results when combined with a basic generator, and it can improve various generative zero-shot learning frameworks.
arXiv Detail & Related papers (2022-04-25T17:54:55Z) - Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them based on the new class data, they suffer from catastrophic forgetting: the model cannot discern old class data clearly from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in real-world federated systems is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
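For illustration only, a hedged sketch of this calibration idea follows: estimate per-class feature statistics, sample virtual representations from the resulting Gaussians, and re-fit the classifier head on them. The diagonal-covariance Gaussians, sizes, and training loop are simplifying assumptions rather than the CCVR implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def class_statistics(features, labels, num_classes):
    # per-class mean and standard deviation of the (frozen) features
    stats = []
    for c in range(num_classes):
        fc = features[labels == c]
        stats.append((fc.mean(dim=0), fc.std(dim=0) + 1e-6))
    return stats

def sample_virtual(stats, per_class=64):
    # draw virtual representations from a diagonal Gaussian per class
    feats, labels = [], []
    for c, (mu, std) in enumerate(stats):
        feats.append(mu + std * torch.randn(per_class, mu.numel()))
        labels.append(torch.full((per_class,), c, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)

# toy usage: re-fit only the classifier head on sampled virtual representations
feats = torch.randn(200, 256)
labels = torch.randint(0, 10, (200,))
head = nn.Linear(256, 10)
opt = torch.optim.SGD(head.parameters(), lr=0.1)
vx, vy = sample_virtual(class_statistics(feats, labels, num_classes=10))
for _ in range(10):
    opt.zero_grad()
    F.cross_entropy(head(vx), vy).backward()
    opt.step()
```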
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Self-Supervised Learning for Fine-Grained Visual Categorization [0.0]
We study the usefulness of SSL for Fine-Grained Visual Categorization (FGVC).
FGVC aims to distinguish objects of visually similar subcategories within a general category.
Our baseline achieves 86.36% top-1 classification accuracy on the CUB-200-2011 dataset.
arXiv Detail & Related papers (2021-05-18T19:16:05Z) - DFS: A Diverse Feature Synthesis Model for Generalized Zero-Shot
Learning [12.856168667514947]
Generative strategies have shown great potential in the Generalized Zero-Shot Learning task.
We propose to enhance the generalizability of GZSL models via improving feature diversity of unseen classes.
arXiv Detail & Related papers (2021-03-19T12:24:42Z) - Generalized Zero-Shot Learning via VAE-Conditioned Generative Flow [83.27681781274406]
Generalized zero-shot learning aims to recognize both seen and unseen classes by transferring knowledge from semantic descriptions to visual representations.
Recent generative methods formulate GZSL as a missing data problem, which mainly adopts GANs or VAEs to generate visual features for unseen classes.
We propose a conditional version of generative flows for GZSL, i.e., VAE-Conditioned Generative Flow (VAE-cFlow).
arXiv Detail & Related papers (2020-09-01T09:12:31Z) - Generalized Zero-Shot Learning Via Over-Complete Distribution [79.5140590952889]
We propose to generate an Over-Complete Distribution (OCD) using Conditional Variational Autoencoder (CVAE) of both seen and unseen classes.
The effectiveness of the framework is evaluated using both Zero-Shot Learning and Generalized Zero-Shot Learning protocols.
arXiv Detail & Related papers (2020-04-01T19:05:28Z)