On Implicit Attribute Localization for Generalized Zero-Shot Learning
- URL: http://arxiv.org/abs/2103.04704v1
- Date: Mon, 8 Mar 2021 12:31:37 GMT
- Title: On Implicit Attribute Localization for Generalized Zero-Shot Learning
- Authors: Shiqi Yang, Kai Wang, Luis Herranz, Joost van de Weijer
- Abstract summary: We show that common ZSL backbones can implicitly localize attributes, yet this property is not exploited.
We then propose SELAR, a simple method that further encourages attribute localization, surprisingly achieving very competitive generalized ZSL (GZSL) performance.
- Score: 43.61533666141709
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot learning (ZSL) aims to discriminate images from unseen classes by
exploiting relations to seen classes via their attribute-based descriptions.
Since attributes are often related to specific parts of objects, many recent
works focus on discovering discriminative regions. However, these methods
usually require additional complex part detection modules or attention
mechanisms. In this paper, 1) we show that common ZSL backbones (without
explicit attention or part detection) can implicitly localize attributes, yet
this property is not exploited. 2) Exploiting it, we then propose SELAR, a
simple method that further encourages attribute localization, surprisingly
achieving very competitive generalized ZSL (GZSL) performance when compared
with more complex state-of-the-art methods. Our findings provide useful insight
for designing future GZSL methods, and SELAR provides an easy-to-implement yet
strong baseline.
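As background for the abstract above: attribute-based ZSL typically maps image features into attribute space and scores them against per-class attribute signatures, so unseen classes can be recognized from their descriptions alone. A minimal sketch of this scoring scheme (all names, dimensions, and the random linear map are hypothetical illustrations, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 512-d image features, 85 attributes, 10 classes.
d, a, n_classes = 512, 85, 10

# Per-class attribute "signatures" (e.g. annotated attribute presence scores).
class_attrs = rng.random((n_classes, a))      # shape (C, a)

# A learned linear map from image features to attribute space; here it is
# random and merely stands in for a trained ZSL backbone's final layer.
W = rng.standard_normal((a, d)) * 0.01        # shape (a, d)

def zsl_scores(features, W, class_attrs):
    """Compatibility of each image with each class's attribute signature."""
    pred_attrs = features @ W.T               # (N, a): predicted attributes
    return pred_attrs @ class_attrs.T         # (N, C): dot-product scores

features = rng.standard_normal((4, d))        # a batch of 4 image features
scores = zsl_scores(features, W, class_attrs)
pred = scores.argmax(axis=1)                  # predicted (possibly unseen) class
```

Classes never seen during training can be scored this way as long as their attribute vectors are available, which is the relation to seen classes the abstract refers to.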
Related papers
- CREST: Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning [48.46511584490582]
Zero-shot learning (ZSL) enables the recognition of novel classes by leveraging semantic knowledge transfer from known to unknown categories.
Real-world challenges such as distribution imbalances and attribute co-occurrence hinder the discernment of local variances in images.
We propose a bidirectional cross-modal ZSL approach CREST to overcome these challenges.
arXiv Detail & Related papers (2024-04-15T10:19:39Z)
- Attribute-Aware Representation Rectification for Generalized Zero-Shot Learning [19.65026043141699]
Generalized Zero-shot Learning (GZSL) has yielded remarkable performance by designing a series of unbiased visual-semantics mappings.
We propose a simple yet effective Attribute-Aware Representation Rectification framework for GZSL, dubbed $\mathbf{(AR)^2}$.
arXiv Detail & Related papers (2023-11-23T11:30:32Z)
- Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems [61.11799513362704]
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective.
arXiv Detail & Related papers (2023-03-03T02:07:40Z)
- Semantic Feature Extraction for Generalized Zero-shot Learning [23.53412767106488]
Generalized zero-shot learning (GZSL) is a technique for training a deep learning model to identify unseen classes using attributes.
In this paper, we put forth a new GZSL technique that greatly improves GZSL classification performance.
arXiv Detail & Related papers (2021-12-29T09:52:30Z)
- Discriminative Region-based Multi-Label Zero-Shot Learning [145.0952336375342]
Multi-label zero-shot learning (ZSL) is a more realistic counterpart of standard single-label ZSL.
We propose an alternate approach towards region-based discriminability-preserving ZSL.
arXiv Detail & Related papers (2021-08-20T17:56:47Z)
- Goal-Oriented Gaze Estimation for Zero-Shot Learning [62.52340838817908]
We introduce a novel goal-oriented gaze estimation module (GEM) to improve the discriminative attribute localization.
We aim to predict the actual human gaze location to obtain the visual attention regions for recognizing a novel object guided by its attribute description.
This work suggests the promising benefits of collecting human gaze datasets and automatic gaze estimation algorithms for high-level computer vision tasks.
arXiv Detail & Related papers (2021-03-05T02:14:57Z) - Prior Knowledge about Attributes: Learning a More Effective Potential
Space for Zero-Shot Recognition [0.07161783472741746]
We build an Attribute Correlation Potential Space Generation model which uses a graph convolution network and attribute correlation to generate a more discriminative potential space.
Our approach outperforms some existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2020-09-14T06:57:23Z)
- Simple and effective localized attribute representations for zero-shot learning [48.053204004771665]
Zero-shot learning (ZSL) aims to discriminate images from unseen classes by exploiting relations to seen classes via their semantic descriptions.
We propose localizing representations in the semantic/attribute space, with a simple but effective pipeline where localization is implicit.
Our method is easy to implement and can serve as a new baseline for zero-shot learning.
arXiv Detail & Related papers (2020-06-10T16:46:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.