Zero-Shot Logit Adjustment
- URL: http://arxiv.org/abs/2204.11822v2
- Date: Thu, 28 Apr 2022 15:21:25 GMT
- Title: Zero-Shot Logit Adjustment
- Authors: Dubing Chen, Yuming Shen, Haofeng Zhang, Philip H.S. Torr
- Abstract summary: Generalized Zero-Shot Learning (GZSL) is a semantic-descriptor-based learning technique for recognizing novel classes at test time.
Existing generation-based methods focus on enhancing the generator while neglecting the classifier; in this paper, we propose a new technique that improves the classifier by incorporating seen-unseen priors via logit adjustment.
Our experiments demonstrate that the proposed technique achieves state-of-the-art when combined with the basic generator, and it can improve various generative zero-shot learning frameworks.
- Score: 89.68803484284408
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic-descriptor-based Generalized Zero-Shot Learning (GZSL) poses
challenges in recognizing novel classes in the test phase. The development of
generative models enables current GZSL techniques to probe further into the
semantic-visual link, culminating in a two-stage form that includes a generator
and a classifier. However, existing generation-based methods focus on enhancing
the generator's effect while neglecting the improvement of the classifier. In
this paper, we first analyze two properties of the generated pseudo unseen
samples: bias and homogeneity. Then, we perform variational Bayesian inference
to back-derive the evaluation metrics, which reflect the balance of the seen
and unseen classes. As a consequence of our derivation, the aforementioned two
properties are incorporated into the classifier training as seen-unseen priors
via logit adjustment. The Zero-Shot Logit Adjustment further puts
semantic-based classifiers into effect in generation-based GZSL. Our
experiments demonstrate that the proposed technique achieves state-of-the-art
when combined with the basic generator, and it can improve various generative
zero-shot learning frameworks. Our codes are available on
https://github.com/cdb342/IJCAI-2022-ZLA.
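For context, the sketch below illustrates the general logit-adjustment idea of adding scaled log-priors to the classifier logits before the cross-entropy loss. It assumes a standard softmax classifier and a hypothetical per-class prior vector; the paper's actual seen-unseen priors are derived from the bias and homogeneity of the generated pseudo unseen samples, so this is only an illustrative approximation, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def adjusted_cross_entropy(logits, targets, class_priors, tau=1.0):
    """Cross-entropy with logit adjustment (illustrative sketch).

    Adds tau * log(prior) to each class logit before the softmax, so classes
    with a larger prior (e.g., over-represented seen classes) are penalized
    relative to under-represented unseen classes. `class_priors` is a
    hypothetical per-class prior; estimating it from the generated pseudo
    unseen samples is specific to the paper and not reproduced here.
    """
    adjusted = logits + tau * torch.log(class_priors + 1e-12)
    return F.cross_entropy(adjusted, targets)

# Toy usage: 5 seen classes with a large prior, 5 unseen classes with a small one.
num_seen, num_unseen = 5, 5
priors = torch.cat([torch.full((num_seen,), 0.18), torch.full((num_unseen,), 0.02)])
logits = torch.randn(8, num_seen + num_unseen)   # batch of 8 samples
targets = torch.randint(0, num_seen + num_unseen, (8,))
loss = adjusted_cross_entropy(logits, targets, priors)
```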
Related papers
- Visual-Augmented Dynamic Semantic Prototype for Generative Zero-Shot Learning [56.16593809016167]
We propose a novel Visual-Augmented Dynamic Semantic prototype method (termed VADS) to boost the generator to learn accurate semantic-visual mapping.
VADS consists of two modules: (1) Visual-aware Domain Knowledge Learning module (VDKL) learns the local bias and global prior of the visual features, which replace pure Gaussian noise to provide richer prior noise information; (2) Vision-Oriented Semantic Updation module (VOSU) updates the semantic prototype according to the visual representations of the samples.
arXiv Detail & Related papers (2024-04-23T07:39:09Z) - GSMFlow: Generation Shifts Mitigating Flow for Generalized Zero-Shot Learning [55.79997930181418]
Generalized Zero-Shot Learning aims to recognize images from both the seen and unseen classes by transferring semantic knowledge from seen to unseen classes.
It is a promising solution to take advantage of generative models to hallucinate realistic unseen samples based on the knowledge learned from the seen classes.
We propose a novel flow-based generative framework that consists of multiple conditional affine coupling layers for learning unseen data generation.
arXiv Detail & Related papers (2022-07-05T04:04:37Z) - Mitigating Generation Shifts for Generalized Zero-Shot Learning [52.98182124310114]
Generalized Zero-Shot Learning (GZSL) is the task of leveraging semantic information (e.g., attributes) to recognize the seen and unseen samples, where unseen classes are not observable during training.
We propose a novel Generation Shifts Mitigating Flow framework for learning unseen data synthesis efficiently and effectively.
Experimental results demonstrate that GSMFlow achieves state-of-the-art recognition performance in both conventional and generalized zero-shot settings.
arXiv Detail & Related papers (2021-07-07T11:43:59Z) - Contrastive Embedding for Generalized Zero-Shot Learning [22.050109158293402]
Generalized zero-shot learning (GZSL) aims to recognize objects from both seen and unseen classes.
Recent feature generation methods learn a generative model that can synthesize the missing visual features of unseen classes.
We propose to integrate the generation model with the embedding model, yielding a hybrid GZSL framework.
arXiv Detail & Related papers (2021-03-30T08:54:03Z) - Learning Clusterable Visual Features for Zero-Shot Recognition [38.8104394191698]
In zero-shot learning (ZSL), conditional generators have been widely used to generate additional training features.
In this paper, we propose to learn clusterable features for ZSL problems.
Experiments on SUN, CUB, and AWA2 datasets show consistent improvement over previous state-of-the-art ZSL results.
arXiv Detail & Related papers (2020-10-07T07:58:55Z) - Generalized Zero-Shot Learning via VAE-Conditioned Generative Flow [83.27681781274406]
Generalized zero-shot learning aims to recognize both seen and unseen classes by transferring knowledge from semantic descriptions to visual representations.
Recent generative methods formulate GZSL as a missing data problem, which mainly adopts GANs or VAEs to generate visual features for unseen classes.
We propose a conditional version of generative flows for GZSL, i.e., VAE-Conditioned Generative Flow (VAE-cFlow).
arXiv Detail & Related papers (2020-09-01T09:12:31Z) - Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z) - Invertible Zero-Shot Recognition Flows [42.839333265321905]
This work incorporates a new family of generative models (i.e., flow-based models) into Zero-Shot Learning (ZSL).
The proposed Invertible Zero-shot Flow (IZF) learns factorized data embeddings with the forward pass of an invertible flow network, while the reverse pass generates data samples.
Experiments on widely-adopted ZSL benchmarks demonstrate the significant performance gain of IZF over existing methods.
arXiv Detail & Related papers (2020-07-09T15:21:28Z)