Learning Semantic Ambiguities for Zero-Shot Learning
- URL: http://arxiv.org/abs/2201.01823v1
- Date: Wed, 5 Jan 2022 21:08:29 GMT
- Title: Learning Semantic Ambiguities for Zero-Shot Learning
- Authors: Celina Hanouti and Hervé Le Borgne
- Abstract summary: We propose a regularization method that can be applied to any conditional generative-based ZSL method.
It learns to synthesize discriminative features for possible semantic descriptions that are not available at training time, that is, the unseen ones.
The approach is evaluated for ZSL and GZSL on four datasets commonly used in the literature.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot learning (ZSL) aims at recognizing classes for which no visual
sample is available at training time. To address this issue, one can rely on a
semantic description of each class. A typical ZSL model learns a mapping
between the visual samples of seen classes and the corresponding semantic
descriptions, in order to do the same on unseen classes at test time. State of
the art approaches rely on generative models that synthesize visual features
from the prototype of a class, such that a classifier can then be learned in a
supervised manner. However, these approaches are usually biased towards seen
classes, whose visual instances are the only ones that can be matched to a given
class prototype. We propose a regularization method that can be applied to any
conditional generative-based ZSL method, by leveraging only the semantic class
prototypes. It learns to synthesize discriminative features for possible
semantic descriptions that are not available at training time, that is, the
unseen ones. The approach is evaluated for ZSL and GZSL on four datasets
commonly used in the literature, in both inductive and transductive settings,
with results on par with or above state-of-the-art approaches.
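The pipeline the abstract describes (learn a generator from class prototypes to visual features on seen classes, synthesize features for unseen prototypes, then train a classifier on those) can be illustrated with a minimal sketch. This is a toy linear stand-in with made-up data, not the paper's method or its regularizer; all names and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 seen classes, 2 unseen classes, each described by a
# 4-dimensional semantic prototype (hypothetical data, for illustration).
d_sem, d_vis = 4, 10
prototypes = rng.normal(size=(6, d_sem))           # rows: classes 0..5
seen, unseen = [0, 1, 2, 3], [4, 5]

# Ground-truth semantics-to-features map (unknown to the model).
W_true = rng.normal(size=(d_sem, d_vis))

def sample_features(c, n=50):
    """Visual features of class c: mapped prototype plus Gaussian noise."""
    return prototypes[c] @ W_true + 0.3 * rng.normal(size=(n, d_vis))

# 1) Fit a linear "generator" on SEEN classes only (least squares).
X_seen = np.vstack([sample_features(c) for c in seen])
P_seen = np.vstack([np.tile(prototypes[c], (50, 1)) for c in seen])
W_hat, *_ = np.linalg.lstsq(P_seen, X_seen, rcond=None)

# 2) Synthesize features for UNSEEN prototypes -- no real samples needed.
synth = {c: prototypes[c] @ W_hat + 0.3 * rng.normal(size=(50, d_vis))
         for c in unseen}

# 3) Nearest-mean classifier trained on the synthesized features.
means = {c: synth[c].mean(axis=0) for c in unseen}

def classify(x):
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

# Evaluate on held-out real samples of the unseen classes.
test = {c: sample_features(c, n=20) for c in unseen}
acc = np.mean([classify(x) == c for c in unseen for x in test[c]])
print(f"unseen-class accuracy: {acc:.2f}")
```

The step the paper adds, roughly, is a regularizer that also pushes the generator to produce discriminative features for prototypes other than the seen ones; the linear generator above has no such term.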
Related papers
- Evolving Semantic Prototype Improves Generative Zero-Shot Learning [73.07035277030573]
In zero-shot learning (ZSL), generative methods synthesize class-related sample features based on predefined semantic prototypes.
We observe that each class's predefined semantic prototype does not accurately match its real semantic prototype.
We propose a dynamic semantic prototype evolving (DSP) method to align the empirically predefined semantic prototypes and the real prototypes for class-related feature synthesis.
arXiv Detail & Related papers (2023-06-12T08:11:06Z)
- Prompting Language-Informed Distribution for Compositional Zero-Shot Learning [73.49852821602057]
Compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts.
We propose a model, dubbed PLID, that prompts the language-informed distribution for the task.
Experimental results on the MIT-States, UT-Zappos, and C-GQA datasets show the superior performance of PLID over prior art.
arXiv Detail & Related papers (2023-05-23T18:00:22Z)
- Learning Prototype via Placeholder for Zero-shot Recognition [18.204927316433448]
We propose to learn prototypes via placeholders, termed LPL, to eliminate the domain shift between seen and unseen classes.
We exploit a novel semantic-oriented fine-tuning to guarantee the semantic reliability of placeholders.
Experiments on five benchmark datasets demonstrate the significant performance gain of LPL over the state-of-the-art methods.
arXiv Detail & Related papers (2022-07-29T09:56:44Z)
- Rich Semantics Improve Few-shot Learning [49.11659525563236]
We show that by using 'class-level' language descriptions, which can be acquired with minimal annotation cost, we can improve few-shot learning performance.
We develop a Transformer-based forward and backward encoding mechanism to relate visual and semantic tokens.
arXiv Detail & Related papers (2021-04-26T16:48:27Z)
- OntoZSL: Ontology-enhanced Zero-shot Learning [19.87808305218359]
Key to implementing zero-shot learning (ZSL) is leveraging prior knowledge of classes, which builds the semantic relationships between classes.
In this paper, we explore richer and more competitive prior knowledge to model the inter-class relationship for ZSL.
To address the data imbalance between seen classes and unseen classes, we developed a generative ZSL framework with Generative Adversarial Networks (GANs).
arXiv Detail & Related papers (2021-02-15T04:39:58Z)
- Zero-shot Learning with Deep Neural Networks for Object Recognition [8.572654816871873]
Zero-shot learning deals with the ability to recognize objects without any visual training sample.
This chapter presents a review of the approaches based on deep neural networks to tackle the ZSL problem.
arXiv Detail & Related papers (2021-02-05T12:27:42Z)
- CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action Recognition [52.66360172784038]
We propose a clustering-based model, which considers all training samples at once, instead of optimizing for each instance individually.
We call the proposed method CLASTER and observe that it consistently improves over the state-of-the-art in all standard datasets.
arXiv Detail & Related papers (2021-01-18T12:46:24Z)
- Attribute Propagation Network for Graph Zero-shot Learning [57.68486382473194]
We introduce the attribute propagation network (APNet), which is composed of 1) a graph propagation model generating attribute vector for each class and 2) a parameterized nearest neighbor (NN) classifier.
APNet achieves either compelling performance or new state-of-the-art results in experiments with two zero-shot learning settings and five benchmark datasets.
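The APNet summary names two components: a graph propagation model that produces an attribute vector per class, and a parameterized nearest-neighbor classifier over those vectors. A heavily simplified, hypothetical sketch of that structure (one row-normalized propagation step and an unparameterized nearest-neighbor lookup, with made-up data) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for APNet's two parts: (1) one step of attribute
# propagation over a class graph, (2) a nearest-neighbor classifier that
# compares a feature vector to the propagated class attribute vectors.
n_classes, d_attr = 5, 8

attrs = rng.normal(size=(n_classes, d_attr))       # per-class attribute vectors
adj = np.eye(n_classes)                            # class graph with self-loops
adj[0, 1] = adj[1, 0] = 1.0                        # classes 0 and 1 are related

# (1) Propagate attributes: row-normalized adjacency averages each class's
# attributes with those of its neighbors.
row_norm = adj / adj.sum(axis=1, keepdims=True)
prop_attrs = row_norm @ attrs

# (2) Nearest-neighbor classification against the propagated attributes.
# (APNet learns a parameterized projection here; this sketch uses identity.)
def predict(x):
    dists = np.linalg.norm(prop_attrs - x, axis=1)
    return int(np.argmin(dists))

x = prop_attrs[3] + 0.05 * rng.normal(size=d_attr)  # noisy sample near class 3
pred = predict(x)
print("predicted class:", pred)
```

Because the classifier only needs an attribute vector per class, unseen classes can be added by extending the graph, which is what makes this family of models suitable for zero-shot settings.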
arXiv Detail & Related papers (2020-09-24T16:53:40Z)
- Information Bottleneck Constrained Latent Bidirectional Embedding for Zero-Shot Learning [59.58381904522967]
We propose a novel embedding based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
arXiv Detail & Related papers (2020-09-16T03:54:12Z)
- Webly Supervised Semantic Embeddings for Large Scale Zero-Shot Learning [8.472636806304273]
Zero-shot learning (ZSL) makes object recognition in images possible in the absence of visual training data for a part of the classes from a dataset.
We focus on the problem of semantic class prototype design for large scale ZSL.
We investigate the use of noisy textual metadata associated to photos as text collections.
arXiv Detail & Related papers (2020-08-06T21:33:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.