Addressing target shift in zero-shot learning using grouped adversarial
learning
- URL: http://arxiv.org/abs/2003.00845v2
- Date: Tue, 16 Jun 2020 11:38:50 GMT
- Title: Addressing target shift in zero-shot learning using grouped adversarial
learning
- Authors: Saneem Ahmed Chemmengath (1), Soumava Paul (2), Samarth Bharadwaj (1),
Suranjana Samanta, Karthik Sankaranarayanan ((1) IBM Research, (2) IIT
Kharagpur)
- Abstract summary: We present a new paradigm for zero-shot learning (ZSL) that: (i) utilizes the class-attribute mapping of unseen classes to estimate the change in target distribution (target shift); and (ii) proposes a novel technique called grouped Adversarial Learning (gAL) to reduce the negative effects of this shift.
Our approach is widely applicable to several existing ZSL algorithms, including those with implicit attribute predictions.
- Score: 1.3857063881574483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot learning (ZSL) algorithms typically work by exploiting attribute
correlations to make predictions on unseen classes. However, these
correlations do not remain intact at test time in most practical settings, and
the resulting change in these correlations adversely affects zero-shot
learning performance. In this paper, we present a new paradigm for ZSL that:
(i) utilizes the class-attribute mapping of unseen classes to estimate the
change in target distribution (target shift), and (ii) proposes a novel
technique called grouped Adversarial Learning (gAL) to reduce the negative effects
of this shift. Our approach is widely applicable to several existing ZSL
algorithms, including those with implicit attribute predictions. We apply the
proposed technique (gAL) to three popular ZSL algorithms: ALE, SJE, and
DeViSE, and show performance improvements on four popular ZSL datasets: AwA2, aPY,
CUB, and SUN. We obtain SOTA results on the SUN and aPY datasets and achieve
comparable results on AwA2.
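The estimation step described above can be illustrated with a minimal sketch: given the class-attribute matrices of seen and unseen classes, compare the attribute marginals of the two sets and derive a per-group importance weight that approximates the target shift. The toy matrices, the uniform class priors, the grouping, and all function names below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical class-attribute matrices: rows = classes, cols = binary attributes.
# These toy values are for illustration only, not taken from the paper.
seen_attrs = np.array([[1, 0, 1, 0],
                       [1, 1, 0, 0],
                       [0, 1, 1, 1]], dtype=float)
unseen_attrs = np.array([[0, 0, 1, 1],
                         [0, 1, 0, 1]], dtype=float)

def attribute_marginals(class_attr, class_prior=None):
    """Marginal probability of each attribute under a (uniform) class prior."""
    n = class_attr.shape[0]
    prior = np.full(n, 1.0 / n) if class_prior is None else class_prior
    return prior @ class_attr

def target_shift_weights(seen, unseen, groups, eps=1e-6):
    """Per-group importance weights approximating the change in attribute
    marginals from seen (source) to unseen (target) classes."""
    p_src = attribute_marginals(seen)
    p_tgt = attribute_marginals(unseen)
    ratio = (p_tgt + eps) / (p_src + eps)
    # Average the per-attribute shift ratios within each semantic group.
    return np.array([ratio[idx].mean() for idx in groups])

# Hypothetical attribute grouping, e.g. color-related vs. shape-related attributes.
groups = [np.array([0, 1]), np.array([2, 3])]
weights = target_shift_weights(seen_attrs, unseen_attrs, groups)
```

In a full gAL-style pipeline, weights of this kind would modulate the per-group losses during adversarial training; here they simply show that attributes rarer among unseen classes receive weights below 1 and over-represented ones above 1.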
Related papers
- ItTakesTwo: Leveraging Peer Representations for Semi-supervised LiDAR Semantic Segmentation [24.743048965822297]
This paper introduces a novel semi-supervised LiDAR semantic segmentation framework called ItTakesTwo (IT2)
IT2 is designed to ensure consistent predictions from peer LiDAR representations, thereby improving the perturbation effectiveness in consistency learning.
Results on public benchmarks show that our approach achieves remarkable improvements over the previous state-of-the-art (SOTA) methods in the field.
arXiv Detail & Related papers (2024-07-09T18:26:53Z)
- Zero-Shot Learning by Harnessing Adversarial Samples [52.09717785644816]
We propose a novel Zero-Shot Learning (ZSL) approach by Harnessing Adversarial Samples (HAS)
HAS advances ZSL through adversarial training which takes into account three crucial aspects.
We demonstrate the effectiveness of our adversarial samples approach in both ZSL and Generalized Zero-Shot Learning (GZSL) scenarios.
arXiv Detail & Related papers (2023-08-01T06:19:13Z)
- Bi-directional Distribution Alignment for Transductive Zero-Shot Learning [48.80413182126543]
We propose a novel transductive zero-shot learning (TZSL) model called Bi-VAEGAN.
It largely improves the shift by a strengthened distribution alignment between the visual and auxiliary spaces.
In benchmark evaluation, Bi-VAEGAN achieves the new state of the art under both the standard and generalized TZSL settings.
arXiv Detail & Related papers (2023-03-15T15:32:59Z)
- Targeted Attention for Generalized- and Zero-Shot Learning [0.0]
The Zero-Shot Learning (ZSL) task attempts to learn concepts for which no labeled training data are available.
We show state-of-the-art results in the Generalized Zero-Shot Learning (GZSL) setting, with Harmonic Mean R-1 of 66.14% on the CUB200 dataset.
arXiv Detail & Related papers (2022-11-17T03:55:18Z)
- Attribute-Modulated Generative Meta Learning for Zero-Shot Classification [52.64680991682722]
We present the Attribute-Modulated generAtive meta-model for Zero-shot learning (AMAZ)
Our model consists of an attribute-aware modulation network and an attribute-augmented generative network.
Our empirical evaluations show that AMAZ improves state-of-the-art methods by 3.8% and 5.1% in ZSL and generalized ZSL settings, respectively.
arXiv Detail & Related papers (2021-04-22T04:16:43Z)
- Task Aligned Generative Meta-learning for Zero-shot Learning [64.16125851588437]
We propose a Task-aligned Generative Meta-learning model for Zero-shot learning (TGMZ)
TGMZ mitigates the potentially biased training and enables meta-ZSL to accommodate real-world datasets containing diverse distributions.
Our comparisons with state-of-the-art algorithms show the improvements of 2.1%, 3.0%, 2.5%, and 7.6% achieved by TGMZ on AWA1, AWA2, CUB, and aPY datasets.
arXiv Detail & Related papers (2021-03-03T05:18:36Z)
- End-to-end Generative Zero-shot Learning via Few-shot Learning [76.9964261884635]
State-of-the-art approaches to Zero-Shot Learning (ZSL) train generative nets to synthesize examples conditioned on the provided metadata.
We introduce an end-to-end generative ZSL framework that uses such an approach as a backbone and feeds its synthesized output to a Few-Shot Learning algorithm.
arXiv Detail & Related papers (2021-02-08T17:35:37Z)
- Generalized Zero-Shot Learning Via Over-Complete Distribution [79.5140590952889]
We propose to generate an Over-Complete Distribution (OCD) using Conditional Variational Autoencoder (CVAE) of both seen and unseen classes.
The effectiveness of the framework is evaluated using both Zero-Shot Learning and Generalized Zero-Shot Learning protocols.
arXiv Detail & Related papers (2020-04-01T19:05:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.