Attribute-Modulated Generative Meta Learning for Zero-Shot
Classification
- URL: http://arxiv.org/abs/2104.10857v1
- Date: Thu, 22 Apr 2021 04:16:43 GMT
- Title: Attribute-Modulated Generative Meta Learning for Zero-Shot
Classification
- Authors: Yun Li, Zhe Liu, Lina Yao, Xianzhi Wang, Can Wang
- Abstract summary: We present the Attribute-Modulated generAtive meta-model for Zero-shot learning (AMAZ)
Our model consists of an attribute-aware modulation network and an attribute-augmented generative network.
Our empirical evaluations show that AMAZ improves state-of-the-art methods by 3.8% and 5.1% in ZSL and generalized ZSL settings, respectively.
- Score: 52.64680991682722
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot learning (ZSL) aims to transfer knowledge from seen classes to
semantically related unseen classes, which are absent during training. The
promising strategies for ZSL include synthesizing visual features of unseen
classes conditioned on semantic side information and incorporating
meta-learning to eliminate the model's inherent bias towards seen classes.
Existing meta generative approaches pursue a common model shared across task
distributions; in contrast, we aim to construct a generative network adaptive
to task characteristics. To this end, we propose the Attribute-Modulated
generAtive meta-model for Zero-shot learning (AMAZ). Our model consists of an
attribute-aware modulation network and an attribute-augmented generative
network. Given unseen classes, the modulation network adaptively modulates the
generator by applying task-specific transformations so that the generative
network can adapt to highly diverse tasks. Our empirical evaluations on four
widely-used benchmarks show that AMAZ improves state-of-the-art methods by 3.8%
and 5.1% in ZSL and generalized ZSL settings, respectively, demonstrating the
superiority of our method.
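The modulation mechanism described above, in which an attribute-aware network applies task-specific transformations to the generator, can be sketched as a FiLM-style affine modulation. This is an illustrative assumption, not the paper's exact architecture: the random weight matrices and the function names `modulation_network` and `modulated_generator` are hypothetical stand-ins for learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

def modulation_network(attr, hidden_dim):
    # Hypothetical: map a class-attribute vector to per-unit scale (gamma)
    # and shift (beta) parameters. A real model would learn these weights.
    W_g = rng.normal(size=(attr.size, hidden_dim)) * 0.1
    W_b = rng.normal(size=(attr.size, hidden_dim)) * 0.1
    gamma = 1.0 + attr @ W_g   # scale, centred at 1
    beta = attr @ W_b          # shift
    return gamma, beta

def modulated_generator(noise, attr, hidden_dim=8):
    # Generator hidden layer whose activations are modulated by the
    # attribute-conditioned (gamma, beta) -- a task-specific affine map.
    W = rng.normal(size=(noise.size, hidden_dim)) * 0.1
    h = np.tanh(noise @ W)
    gamma, beta = modulation_network(attr, hidden_dim)
    return gamma * h + beta

attr = np.ones(5)       # toy class-attribute vector (e.g. "has stripes", ...)
z = rng.normal(size=4)  # noise input to the generator
feat = modulated_generator(z, attr)
print(feat.shape)       # (8,)
```

The key point is that the generator's weights are shared across tasks, while the attribute-derived `(gamma, beta)` adapt its activations per class.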
Related papers
- Meta-Learned Attribute Self-Interaction Network for Continual and
Generalized Zero-Shot Learning [46.6282595346048]
Zero-shot learning (ZSL) is a promising approach to generalizing a model to categories unseen during training.
We propose a Meta-learned Attribute self-Interaction Network (MAIN) for continual ZSL.
By pairing attribute self-interaction trained using meta-learning with inverse regularization of the attribute encoder, we are able to outperform state-of-the-art results without leveraging the unseen class attributes.
arXiv Detail & Related papers (2023-12-02T16:23:01Z)
- GSMFlow: Generation Shifts Mitigating Flow for Generalized Zero-Shot Learning [55.79997930181418]
Generalized Zero-Shot Learning aims to recognize images from both the seen and unseen classes by transferring semantic knowledge from seen to unseen classes.
A promising solution is to take advantage of generative models to hallucinate realistic unseen samples based on the knowledge learned from the seen classes.
We propose a novel flow-based generative framework that consists of multiple conditional affine coupling layers for learning unseen data generation.
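The conditional affine coupling layers mentioned above can be sketched as follows. This is a minimal RealNVP-style illustration, not GSMFlow's actual architecture: the fixed random map standing in for the learned scale/shift networks is an assumption made purely so the example runs.

```python
import numpy as np

def affine_coupling(x, cond, forward=True):
    # One conditional affine coupling layer: split the input, and let one
    # half -- together with the semantic condition -- produce scale/shift
    # parameters for the other half. Invertible by construction.
    d = x.size // 2
    x1, x2 = x[:d], x[d:]
    rng = np.random.default_rng(42)          # stands in for a learned net
    W = rng.normal(size=(d + cond.size, x.size - d)) * 0.1
    h = np.concatenate([x1, cond]) @ W
    log_s, t = np.tanh(h), h                 # bounded log-scale and shift
    if forward:
        y2 = x2 * np.exp(log_s) + t
    else:
        y2 = (x2 - t) * np.exp(-log_s)
    return np.concatenate([x1, y2])

x = np.arange(6, dtype=float)
c = np.ones(3)                               # semantic condition (attributes)
y = affine_coupling(x, c, forward=True)
x_rec = affine_coupling(y, c, forward=False)
print(np.allclose(x, x_rec))                 # True -- the layer is invertible
```

Because the scale and shift depend only on the untouched half and the condition, the inverse is exact, which is what lets flow-based generators evaluate likelihoods and sample unseen-class features.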
arXiv Detail & Related papers (2022-07-05T04:04:37Z)
- DUET: Cross-modal Semantic Grounding for Contrastive Zero-shot Learning [37.48292304239107]
We present a transformer-based end-to-end ZSL method named DUET.
We develop a cross-modal semantic grounding network to investigate the model's capability of disentangling semantic attributes from the images.
We find that DUET often achieves state-of-the-art performance, that its components are effective, and that its predictions are interpretable.
arXiv Detail & Related papers (2022-07-04T11:12:12Z)
- Zero-shot Learning with Class Description Regularization [10.739164530098755]
We introduce a novel form of regularization that encourages generative ZSL models to pay more attention to the description of each category.
Our empirical results demonstrate improvements over the performance of multiple state-of-the-art models on the task of generalized zero-shot recognition and classification.
arXiv Detail & Related papers (2021-06-30T14:56:15Z)
- Task Aligned Generative Meta-learning for Zero-shot Learning [64.16125851588437]
We propose a Task-aligned Generative Meta-learning model for Zero-shot learning (TGMZ)
TGMZ mitigates the potentially biased training and enables meta-ZSL to accommodate real-world datasets containing diverse distributions.
Our comparisons with state-of-the-art algorithms show improvements of 2.1%, 3.0%, 2.5%, and 7.6% achieved by TGMZ on the AWA1, AWA2, CUB, and aPY datasets, respectively.
arXiv Detail & Related papers (2021-03-03T05:18:36Z)
- Meta-Learned Attribute Self-Gating for Continual Generalized Zero-Shot Learning [82.07273754143547]
We propose a meta-continual zero-shot learning (MCZSL) approach to generalizing a model to categories unseen during training.
By pairing self-gating of attributes and scaled class normalization with meta-learning based training, we are able to outperform state-of-the-art results.
arXiv Detail & Related papers (2021-02-23T18:36:14Z)
- Attribute Propagation Network for Graph Zero-shot Learning [57.68486382473194]
We introduce the attribute propagation network (APNet), which is composed of 1) a graph propagation model generating attribute vector for each class and 2) a parameterized nearest neighbor (NN) classifier.
APNet achieves either compelling performance or new state-of-the-art results in experiments with two zero-shot learning settings and five benchmark datasets.
arXiv Detail & Related papers (2020-09-24T16:53:40Z)
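The parameterized nearest-neighbour classifier in APNet's summary can be sketched as below. This is a simplified assumption: `nn_classify` and the single projection matrix `M` are hypothetical stand-ins for APNet's learned metric, and the graph propagation step that produces each class's attribute vector is omitted.

```python
import numpy as np

def nn_classify(feature, class_attrs, M):
    # Compare an image feature to each class's (propagated) attribute
    # vector under a parameterized projection M, and return the index of
    # the closest class by squared Euclidean distance.
    scores = []
    for a in class_attrs:
        diff = feature - a @ M               # project attributes into feature space
        scores.append(-np.sum(diff ** 2))    # negative squared distance
    return int(np.argmax(scores))

# Toy example: 3 classes with 4-dim attribute vectors, identity projection,
# and a feature that exactly matches class 2's projected attributes.
attrs = np.eye(3, 4)
M = np.eye(4)
feat = attrs[2] @ M
print(nn_classify(feat, attrs, M))           # 2
```

Because classification reduces to a distance in a shared attribute/feature space, the same classifier applies unchanged to classes never seen during training, which is the core appeal of attribute-based ZSL.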
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.