Task Aligned Generative Meta-learning for Zero-shot Learning
- URL: http://arxiv.org/abs/2103.02185v1
- Date: Wed, 3 Mar 2021 05:18:36 GMT
- Title: Task Aligned Generative Meta-learning for Zero-shot Learning
- Authors: Zhe Liu, Yun Li, Lina Yao, Xianzhi Wang, Guodong Long
- Abstract summary: We propose a Task-aligned Generative Meta-learning model for Zero-shot learning (TGMZ).
TGMZ mitigates the potentially biased training and enables meta-ZSL to accommodate real-world datasets containing diverse distributions.
Our comparisons with state-of-the-art algorithms show improvements of 2.1%, 3.0%, 2.5%, and 7.6% achieved by TGMZ on the AWA1, AWA2, CUB, and aPY datasets.
- Score: 64.16125851588437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot learning (ZSL) refers to the problem of learning to classify
instances from novel (unseen) classes that are absent from the training set of
(seen) classes. Most ZSL methods infer the correlation between visual features
and attributes to train the classifier for unseen classes. However, such models
may have a strong bias towards seen classes during training. Meta-learning has
been introduced to mitigate this bias, but existing meta-ZSL methods are
inapplicable when the tasks used for training are sampled from diverse
distributions. In this regard,
we propose a novel Task-aligned Generative Meta-learning model for Zero-shot
learning (TGMZ). TGMZ mitigates the potentially biased training and enables
meta-ZSL to accommodate real-world datasets containing diverse distributions.
TGMZ incorporates an attribute-conditioned task-wise distribution alignment
network that projects tasks into a unified distribution to deliver an unbiased
model. Our comparisons with state-of-the-art algorithms show improvements
of 2.1%, 3.0%, 2.5%, and 7.6% achieved by TGMZ on the AWA1, AWA2, CUB, and aPY
datasets, respectively. TGMZ also outperforms competitors by 3.6% in the
generalized zero-shot learning (GZSL) setting and by 7.9% in our proposed
fusion-ZSL setting.
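The central mechanism, an alignment module that, conditioned on class attributes, projects each task's features into a unified distribution, can be sketched in a few lines of PyTorch. Everything below (module name, dimensions, the affine form of the conditioning) is an illustrative assumption, not the authors' implementation:

```python
# Minimal sketch of an attribute-conditioned task-wise alignment module.
# All names and dimensions are illustrative, not the authors' code.
import torch
import torch.nn as nn


class TaskAlignment(nn.Module):
    """Projects one task's features toward a shared (unified) distribution,
    with the affine transform conditioned on the task's class attributes."""

    def __init__(self, feat_dim: int, attr_dim: int):
        super().__init__()
        # Predict a per-task scale and shift from the mean attribute vector.
        self.cond = nn.Linear(attr_dim, 2 * feat_dim)

    def forward(self, feats: torch.Tensor, attrs: torch.Tensor) -> torch.Tensor:
        # feats: (n_samples, feat_dim) visual features of one task
        # attrs: (n_samples, attr_dim) attributes of the same samples
        # Standardize within the task, then apply the conditioned affine map.
        mu, sigma = feats.mean(0, keepdim=True), feats.std(0, keepdim=True)
        normed = (feats - mu) / (sigma + 1e-6)
        scale, shift = self.cond(attrs.mean(0)).chunk(2)
        return normed * (1 + scale) + shift


# Usage: align a 16-sample task with 2048-d features and 85-d attributes.
align = TaskAlignment(feat_dim=2048, attr_dim=85)
aligned = align(torch.randn(16, 2048), torch.rand(16, 85))  # (16, 2048)
```

Meta-training would then run on the aligned features, so that tasks drawn from different distributions look alike to the meta-learner.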
Related papers
- Bi-directional Distribution Alignment for Transductive Zero-Shot Learning [48.80413182126543] (2023-03-15)
We propose a novel transductive zero-shot learning (TZSL) model called Bi-VAEGAN.
It largely mitigates the shift through a strengthened distribution alignment between the visual and auxiliary spaces.
In benchmark evaluation, Bi-VAEGAN achieves a new state of the art under both the standard and generalized TZSL settings.
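One minimal way to read "a strengthened distribution alignment between the visual and auxiliary spaces" is a symmetric matching loss applied in both directions. The moment-matching stand-in below is an assumption; Bi-VAEGAN's actual objective is built from VAE and GAN components:

```python
# Hedged sketch: symmetric moment-matching between visual and auxiliary
# embeddings in a shared latent space. Only illustrates the two-way idea.
import torch
import torch.nn as nn

latent = 64
vis_enc = nn.Linear(2048, latent)  # visual space    -> shared latent
aux_enc = nn.Linear(85, latent)    # attribute space -> shared latent


def moment_loss(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Match first and second moments of two batches of embeddings.
    mean_gap = (a.mean(0) - b.mean(0)).pow(2).sum()
    var_gap = (a.var(0) - b.var(0)).pow(2).sum()
    return mean_gap + var_gap


v = vis_enc(torch.randn(32, 2048))  # batch of visual features
s = aux_enc(torch.rand(32, 85))     # batch of class attributes
# "Bi-directional": align visual->auxiliary and auxiliary->visual.
loss = moment_loss(v, s.detach()) + moment_loss(s, v.detach())
loss.backward()
```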
- Targeted Attention for Generalized- and Zero-Shot Learning [0.0] (2022-11-17)
The Zero-Shot Learning (ZSL) task attempts to learn concepts without any labeled data.
We show state-of-the-art results in the Generalized Zero-Shot Learning (GZSL) setting, with Harmonic Mean R-1 of 66.14% on the CUB200 dataset.
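The Harmonic Mean reported here is the standard GZSL metric: the harmonic mean of per-class accuracy on seen and unseen classes. A quick reference implementation (the example accuracies are made up, not the paper's):

```python
def gzsl_harmonic_mean(acc_seen: float, acc_unseen: float) -> float:
    """Standard GZSL metric: harmonic mean of seen/unseen accuracies."""
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)


# Hypothetical accuracies: 70% on seen and 62.7% on unseen classes.
print(gzsl_harmonic_mean(0.70, 0.627))  # -> ~0.6615
```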
- Efficient Gaussian Process Model on Class-Imbalanced Datasets for Generalized Zero-Shot Learning [37.00463358780726] (2022-10-11)
We propose a Neural Network model that learns a latent feature embedding and a Gaussian Process (GP) regression model that predicts latent feature prototypes of unseen classes.
Our model is trained efficiently with a simple training strategy that mitigates the impact of class-imbalanced training data.
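The prototype-prediction step has a compact closed form: GP regression from class attributes to latent prototypes, with unseen prototypes given by the posterior mean. The RBF kernel, noise level, and shapes below are assumptions standing in for the paper's efficient GP model:

```python
# Hedged sketch: GP posterior mean mapping class attributes -> latent
# prototypes. Kernel choice and hyperparameters are illustrative.
import numpy as np


def rbf(a: np.ndarray, b: np.ndarray, gamma: float = 0.1) -> np.ndarray:
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)


rng = np.random.default_rng(0)
attr_seen = rng.random((40, 85))            # attributes of 40 seen classes
proto_seen = rng.standard_normal((40, 64))  # their latent prototypes
attr_unseen = rng.random((10, 85))          # attributes of 10 unseen classes

K = rbf(attr_seen, attr_seen) + 1e-3 * np.eye(40)  # noisy kernel matrix
K_star = rbf(attr_unseen, attr_seen)               # cross-kernel
# GP posterior mean: K_* (K + sigma^2 I)^{-1} Y
proto_unseen = K_star @ np.linalg.solve(K, proto_seen)  # (10, 64)
```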
- Federated Zero-Shot Learning for Visual Recognition [55.65879596326147] (2022-09-05)
We propose a novel federated zero-shot learning framework, FedZSL.
FedZSL learns a central model from the decentralized data residing on edge devices.
The effectiveness and robustness of FedZSL are demonstrated by extensive experiments conducted on three zero-shot benchmark datasets.
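Learning a central model from decentralized edge data is the classic federated-averaging setting. The generic FedAvg aggregation step is sketched below; FedZSL's actual protocol (for instance, how semantic attributes are handled) may differ:

```python
# Hedged sketch: generic FedAvg aggregation of client models, not
# FedZSL's specific protocol.
import copy
import torch
import torch.nn as nn


def fed_avg(global_model: nn.Module, client_states: list) -> nn.Module:
    """Average client parameter dicts into the global model (equal weights)."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        for state in client_states[1:]:
            avg[key] = avg[key] + state[key]
        avg[key] = avg[key] / len(client_states)
    global_model.load_state_dict(avg)
    return global_model


model = nn.Linear(2048, 85)  # e.g., a visual -> attribute projection head
clients = [copy.deepcopy(model) for _ in range(3)]
# ... each client trains locally on its own data ...
model = fed_avg(model, [c.state_dict() for c in clients])
```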
- Attribute-Modulated Generative Meta Learning for Zero-Shot Classification [52.64680991682722] (2021-04-22)
We present the Attribute-Modulated generAtive meta-model for Zero-shot learning (AMAZ).
Our model consists of an attribute-aware modulation network and an attribute-augmented generative network.
Our empirical evaluations show that AMAZ improves state-of-the-art methods by 3.8% and 5.1% in ZSL and generalized ZSL settings, respectively.
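A common way to realize an "attribute-aware modulation network" is FiLM-style conditioning, where attributes produce per-layer scales and shifts for the generator. The sketch below shows that generic pattern; it is an assumption, not AMAZ's exact architecture:

```python
# Hedged sketch: FiLM-style attribute modulation of one generator layer.
# Names and dimensions are illustrative.
import torch
import torch.nn as nn


class AttrModulatedLayer(nn.Module):
    def __init__(self, attr_dim: int, hidden: int):
        super().__init__()
        self.fc = nn.Linear(hidden, hidden)
        self.film = nn.Linear(attr_dim, 2 * hidden)  # -> (scale, shift)

    def forward(self, h: torch.Tensor, attrs: torch.Tensor) -> torch.Tensor:
        scale, shift = self.film(attrs).chunk(2, dim=-1)
        return torch.relu(self.fc(h) * (1 + scale) + shift)


layer = AttrModulatedLayer(attr_dim=85, hidden=256)
noise = torch.randn(8, 256)          # generator input
attrs = torch.rand(8, 85)            # class attributes for 8 samples
fake_features = layer(noise, attrs)  # attribute-conditioned output
```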
- Meta-Learned Attribute Self-Gating for Continual Generalized Zero-Shot Learning [82.07273754143547] (2021-02-23)
We propose a meta-continual zero-shot learning (MCZSL) approach to generalizing a model to categories unseen during training.
By pairing self-gating of attributes and scaled class normalization with meta-learning based training, we are able to surpass state-of-the-art results.
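"Self-gating of attributes" suggests the attribute vector learns to reweight its own dimensions. A minimal reading, with a sigmoid gate and a learnable scaled normalization (both assumptions), looks like this:

```python
# Hedged sketch: sigmoid self-gating of an attribute vector plus a scaled
# normalization step. The paper's exact gating/normalization may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttrSelfGate(nn.Module):
    def __init__(self, attr_dim: int):
        super().__init__()
        self.gate = nn.Linear(attr_dim, attr_dim)
        self.scale = nn.Parameter(torch.ones(1))  # learnable norm scale

    def forward(self, attrs: torch.Tensor) -> torch.Tensor:
        gated = attrs * torch.sigmoid(self.gate(attrs))  # self-gating
        return self.scale * F.normalize(gated, dim=-1)   # scaled normalization


gate = AttrSelfGate(attr_dim=312)     # e.g., CUB's 312 attributes
class_attrs = torch.rand(200, 312)    # one row per class
class_embeddings = gate(class_attrs)  # gated, normalized embeddings
```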
- End-to-end Generative Zero-shot Learning via Few-shot Learning [76.9964261884635] (2021-02-08)
State-of-the-art approaches to Zero-Shot Learning (ZSL) train generative nets to synthesize examples conditioned on the provided metadata.
We introduce an end-to-end generative ZSL framework that uses such an approach as a backbone and feeds its synthesized output to a Few-Shot Learning algorithm.
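In its simplest form, such a pipeline synthesizes features for unseen classes and hands them to the few-shot learner as support data. Here a nearest-prototype classifier stands in for the FSL algorithm (an assumption, as is the generator shape):

```python
# Hedged sketch: a conditional generator synthesizes unseen-class features,
# and a nearest-prototype classifier plays the role of the FSL algorithm.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(85 + 32, 512), nn.ReLU(), nn.Linear(512, 2048))


def synthesize(attrs: torch.Tensor, n_per_class: int = 5) -> torch.Tensor:
    # attrs: (n_classes, 85) -> features: (n_classes, n_per_class, 2048)
    a = attrs.repeat_interleave(n_per_class, dim=0)
    z = torch.randn(a.size(0), 32)  # noise for sample diversity
    return gen(torch.cat([a, z], dim=-1)).view(attrs.size(0), n_per_class, -1)


unseen_attrs = torch.rand(10, 85)
protos = synthesize(unseen_attrs).mean(dim=1)     # (10, 2048) prototypes
query = torch.randn(4, 2048)                      # real test features
pred = torch.cdist(query, protos).argmin(dim=-1)  # nearest-prototype label
```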
- Addressing target shift in zero-shot learning using grouped adversarial learning [1.3857063881574483] (2020-03-02)
We present a new paradigm for zero-shot learning (ZSL) that (i) utilizes the class-attribute mapping of unseen classes to estimate the change in the target distribution (target shift), and (ii) introduces a novel technique, grouped Adversarial Learning (gAL), to reduce the negative effects of this shift.
Our approach is widely applicable for several existing ZSL algorithms, including those with implicit attribute predictions.
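Adversarial reduction of a distribution shift is usually built on a gradient-reversal layer. That generic building block is sketched below, with gAL's attribute grouping (the paper's actual contribution) deliberately left out:

```python
# Hedged sketch: a gradient-reversal layer, the usual building block for
# adversarial alignment. gAL's attribute grouping is not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam: float = 1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        # Flip the gradient so the feature extractor fights the discriminator.
        return -ctx.lam * grad_out, None


feats = torch.randn(8, 128, requires_grad=True)
disc = nn.Linear(128, 2)  # e.g., a seen-vs-unseen discriminator
logits = disc(GradReverse.apply(feats))
loss = F.cross_entropy(logits, torch.randint(0, 2, (8,)))
loss.backward()  # feats now receives reversed adversarial gradients
```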
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.