Learning without Seeing nor Knowing: Towards Open Zero-Shot Learning
- URL: http://arxiv.org/abs/2103.12437v1
- Date: Tue, 23 Mar 2021 10:30:50 GMT
- Title: Learning without Seeing nor Knowing: Towards Open Zero-Shot Learning
- Authors: Federico Marmoreo, Julio Ivan Davila Carrazco, Vittorio Murino, Jacopo Cavazza
- Abstract summary: In Generalized Zero-Shot Learning (GZSL) unseen categories can be predicted by leveraging their class embeddings.
We propose Open Zero-Shot Learning (OZSL) to extend GZSL to the open-world setting.
We formalize OZSL as the problem of recognizing seen and unseen classes while also rejecting instances from unknown categories.
- Score: 27.283748476678117
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Generalized Zero-Shot Learning (GZSL), unseen categories (for which no
visual data are available at training time) can be predicted by leveraging
their class embeddings (e.g., a list of attributes describing them) together
with a complementary pool of seen classes (paired with both visual data and
class embeddings). Although GZSL is arguably challenging, we posit that knowing
the class embeddings in advance, especially for unseen categories, is an actual
limit to the applicability of GZSL in real-world scenarios. To relax this
assumption, we propose Open Zero-Shot Learning (OZSL) to extend GZSL to the
open-world setting. We formalize OZSL as the problem of recognizing seen and
unseen classes (as in GZSL) while also rejecting instances from unknown
categories, for which neither visual data nor class embeddings are provided. We
introduce evaluation protocols, error metrics, and benchmark datasets for OZSL.
We also suggest tackling the OZSL problem by performing unknown feature
generation (instead of only unseen feature generation, as done in GZSL). We
achieve this by optimizing a generative process to sample unknown class
embeddings as complementary to the seen and the unseen ones. We intend these
results to be the groundwork for future research, extending the standard
closed-world zero-shot learning (GZSL) with its novel open-world counterpart
(OZSL).
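To make the formulation above concrete, here is a minimal, self-contained Python/NumPy sketch of (i) sampling "unknown" class embeddings complementary to the seen and unseen ones and (ii) nearest-prototype prediction that rejects instances falling closest to an unknown embedding. The Gaussian sampling heuristic, the rejection-by-prototype rule, and all dimensions and thresholds are illustrative assumptions, not the paper's actual generative model or classifier.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy class embeddings (e.g., attribute vectors); all sizes/values are placeholders.
seen = rng.normal(size=(5, 85))     # seen classes: visual data + class embeddings
unseen = rng.normal(size=(3, 85))   # unseen classes: class embeddings only
known = np.vstack([seen, unseen])   # everything that comes with a class embedding

def sample_unknown_embeddings(known, n=10, min_dist=9.0):
    """Draw candidate class embeddings and keep those far from every known one,
    i.e. 'complementary' to the seen and unseen classes (a heuristic stand-in
    for the paper's optimized generative process)."""
    mean, std = known.mean(axis=0), known.std(axis=0)
    cand = rng.normal(loc=mean, scale=2.0 * std, size=(20 * n, known.shape[1]))
    dist = np.linalg.norm(cand[:, None, :] - known[None, :, :], axis=-1).min(axis=1)
    return cand[dist > min_dist][:n]

unknown = sample_unknown_embeddings(known)
prototypes = np.vstack([known, unknown])

def ozsl_predict(x, prototypes, n_known):
    """Nearest-prototype prediction with rejection: if the closest prototype is a
    synthesized 'unknown' embedding, reject the instance instead of assigning it
    a seen/unseen label."""
    idx = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    return idx if idx < n_known else -1   # -1 marks rejection as unknown

x = rng.normal(size=85)  # a test visual feature already mapped to embedding space
print(ozsl_predict(x, prototypes, n_known=len(known)))
```
In the paper, the generated unknown class embeddings would instead condition a feature generator so that "unknown" visual features can be synthesized and used to train a classifier with an explicit rejection option; the distance heuristic above only stands in for that learned process.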
Related papers
- LETS-GZSL: A Latent Embedding Model for Time Series Generalized Zero Shot Learning [1.4665304971699262]
We propose a Latent Embedding for Time Series - GZSL (LETS-GZSL) model that can solve the problem of GZSL for time series classification (TSC).
Our framework is able to achieve a harmonic mean value of at least 55% on most datasets except when the number of unseen classes is greater than 3.
arXiv Detail & Related papers (2022-07-25T09:31:22Z)
- OpenLDN: Learning to Discover Novel Classes for Open-World Semi-Supervised Learning [110.40285771431687]
Semi-supervised learning (SSL) is one of the dominant approaches to address the annotation bottleneck of supervised learning.
Recent SSL methods can effectively leverage a large repository of unlabeled data to improve performance while relying on a small set of labeled data.
This work introduces OpenLDN, which utilizes a pairwise similarity loss to discover novel classes (a generic form of such a loss is sketched after this list).
arXiv Detail & Related papers (2022-07-05T18:51:05Z)
- Generative Zero-Shot Learning for Semantic Segmentation of 3D Point Cloud [79.99653758293277]
We present the first generative approach for both Zero-Shot Learning (ZSL) and Generalized ZSL (GZSL) on 3D data.
We show that it reaches or outperforms the state of the art on ModelNet40 classification for both inductive ZSL and inductive GZSL.
Our experiments show that our method outperforms strong baselines, which we additionally propose for this task.
arXiv Detail & Related papers (2021-08-13T13:29:27Z)
- FREE: Feature Refinement for Generalized Zero-Shot Learning [86.41074134041394]
Generalized zero-shot learning (GZSL) has achieved significant progress, with many efforts dedicated to overcoming the problems of visual-semantic domain gap and seen-unseen bias.
Most existing methods directly use feature extraction models trained on ImageNet alone, ignoring the cross-dataset bias between ImageNet and GZSL benchmarks.
We propose a simple yet effective GZSL method, termed feature refinement for generalized zero-shot learning (FREE), to tackle the above problem.
arXiv Detail & Related papers (2021-07-29T08:11:01Z)
- Contrastive Embedding for Generalized Zero-Shot Learning [22.050109158293402]
Generalized zero-shot learning (GZSL) aims to recognize objects from both seen and unseen classes.
Recent feature generation methods learn a generative model that can synthesize the missing visual features of unseen classes.
We propose to integrate the generation model with the embedding model, yielding a hybrid GZSL framework.
arXiv Detail & Related papers (2021-03-30T08:54:03Z)
- Goal-Oriented Gaze Estimation for Zero-Shot Learning [62.52340838817908]
We introduce a novel goal-oriented gaze estimation module (GEM) to improve the discriminative attribute localization.
We aim to predict the actual human gaze location to get the visual attention regions for recognizing a novel object guided by attribute description.
This work suggests the promising benefits of collecting human gaze datasets and of automatic gaze estimation algorithms for high-level computer vision tasks.
arXiv Detail & Related papers (2021-03-05T02:14:57Z)
- End-to-end Generative Zero-shot Learning via Few-shot Learning [76.9964261884635]
State-of-the-art approaches to Zero-Shot Learning (ZSL) train generative nets to synthesize examples conditioned on the provided metadata.
We introduce an end-to-end generative ZSL framework that uses such an approach as a backbone and feeds its synthesized output to a Few-Shot Learning algorithm.
arXiv Detail & Related papers (2021-02-08T17:35:37Z)
- A Review of Generalized Zero-Shot Learning Methods [31.539434340951786]
Generalized zero-shot learning (GZSL) aims to train a model for classifying data samples under the condition that some output classes are unknown during supervised learning.
GZSL leverages semantic information of the seen (source) and unseen (target) classes to bridge the gap between both seen and unseen classes.
arXiv Detail & Related papers (2020-11-17T14:00:30Z)
- Generalized Continual Zero-Shot Learning [7.097782028036196]
Zero-shot learning (ZSL) aims to classify unseen classes by transferring knowledge from seen classes based on class descriptions.
We propose a more general and practical setup for ZSL, where classes arrive sequentially in the form of a task.
We use knowledge distillation, and store and replay a few samples from previous tasks using a small episodic memory (see the sketch after this list).
arXiv Detail & Related papers (2020-11-17T08:47:54Z)
- Information Bottleneck Constrained Latent Bidirectional Embedding for Zero-Shot Learning [59.58381904522967]
We propose a novel embedding based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
arXiv Detail & Related papers (2020-09-16T03:54:12Z)
- Leveraging Seen and Unseen Semantic Relationships for Generative Zero-Shot Learning [14.277015352910674]
We propose a generative model that explicitly performs knowledge transfer by incorporating a novel Semantic Regularized Loss (SR-Loss).
Experiments on seven benchmark datasets demonstrate the superiority of the LsrGAN compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-19T01:25:53Z)
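Regarding the OpenLDN entry above: a pairwise similarity loss for novel class discovery is commonly implemented as a binary cross-entropy on the inner product of two samples' predicted class distributions, driven by pseudo-labels indicating whether a pair likely belongs to the same class. The sketch below shows this generic form; it is not necessarily OpenLDN's exact loss, and the toy shapes and pseudo-labels are placeholders.
```python
import torch
import torch.nn.functional as F

def pairwise_similarity_loss(logits_a, logits_b, same_class):
    """Binary cross-entropy on the inner product of two samples' predicted class
    distributions: pairs pseudo-labelled as 'same class' are pushed towards
    similar predictions, 'different class' pairs towards dissimilar ones."""
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    sim = (p_a * p_b).sum(dim=1).clamp(1e-7, 1 - 1e-7)   # similarity in (0, 1)
    return F.binary_cross_entropy(sim, same_class.float())

# Toy usage: 4 unlabeled pairs over 6 (base + candidate novel) classes.
logits_a, logits_b = torch.randn(4, 6), torch.randn(4, 6)
same_class = torch.tensor([1.0, 0.0, 1.0, 0.0])  # pseudo-labels, e.g. from feature similarity
print(pairwise_similarity_loss(logits_a, logits_b, same_class))
```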
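Regarding the Generalized Continual Zero-Shot Learning entry: it combines knowledge distillation with a small episodic memory that stores and replays a few samples from previous tasks. Below is a minimal sketch of those two components (a reservoir-style buffer plus a soft-target distillation term); the buffer capacity, feature sizes, and temperature are assumptions for illustration, not the paper's settings.
```python
import random
import torch
import torch.nn.functional as F

class EpisodicMemory:
    """Tiny replay buffer holding a few (feature, label) pairs from past tasks."""
    def __init__(self, capacity=200):
        self.capacity, self.buffer = capacity, []

    def add(self, features, labels):
        for f, y in zip(features, labels):
            if len(self.buffer) < self.capacity:
                self.buffer.append((f, y))
            else:  # replace a random slot so the memory stays small but diverse
                self.buffer[random.randrange(self.capacity)] = (f, y)

    def sample(self, n=32):
        feats, ys = zip(*random.sample(self.buffer, min(n, len(self.buffer))))
        return torch.stack(feats), torch.stack(ys)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target KL term keeping the current model close to a frozen copy of
    the model trained on previous tasks (standard knowledge distillation)."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)

# Toy usage: store one batch from an old task, then replay it alongside the
# distillation term when training on a new task.
memory = EpisodicMemory()
memory.add(torch.randn(16, 2048), torch.arange(16))
replayed_feats, replayed_labels = memory.sample(8)
print(distillation_loss(torch.randn(8, 10), torch.randn(8, 10)))
```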