Region Semantically Aligned Network for Zero-Shot Learning
- URL: http://arxiv.org/abs/2110.07130v1
- Date: Thu, 14 Oct 2021 03:23:40 GMT
- Title: Region Semantically Aligned Network for Zero-Shot Learning
- Authors: Ziyang Wang, Yunhao Gou, Jingjing Li, Yu Zhang, Yang Yang
- Abstract summary: We propose a Region Semantically Aligned Network (RSAN) which maps local features of unseen classes to their semantic attributes.
We obtain each attribute from a specific region of the output and exploit these attributes for recognition.
Experiments on several standard ZSL datasets reveal the benefit of the proposed RSAN method, outperforming state-of-the-art methods.
- Score: 18.18665627472823
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Zero-shot learning (ZSL) aims to recognize unseen classes based on the
knowledge of seen classes. Previous methods focused on learning direct
embeddings from global features to the semantic space, in the hope of
transferring knowledge from seen classes to unseen classes. However, an unseen
class shares local visual features with a set of seen classes, so relying on
global visual features makes the knowledge transfer ineffective. To tackle this problem, we
propose a Region Semantically Aligned Network (RSAN), which maps local features
of unseen classes to their semantic attributes. Instead of using global
features which are obtained by an average pooling layer after an image encoder,
we directly utilize the output of the image encoder which maintains local
information of the image. Concretely, we obtain each attribute from a specific
region of the output and exploit these attributes for recognition. As a result,
the knowledge of seen classes can be successfully transferred to unseen classes
in a region-based manner. In addition, we regularize the image encoder through
attribute regression with semantic knowledge to extract robust and
attribute-related visual features. Experiments on several standard ZSL datasets
reveal the benefit of the proposed RSAN method, outperforming state-of-the-art
methods.
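
A minimal sketch of the region-to-attribute idea described in the abstract, written in a PyTorch style. This is not the authors' released implementation: the module names, tensor shapes, and the choice of taking each attribute's score from its strongest spatial response are illustrative assumptions.

```python
# Hedged sketch (not the authors' code): predict attributes from the spatial
# feature map of an image encoder instead of a globally pooled vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionAttributeHead(nn.Module):
    """Predict one score per attribute from the (H, W) feature map,
    taking each attribute's score from a specific region of the map."""
    def __init__(self, feat_dim: int, num_attrs: int):
        super().__init__()
        # 1x1 conv turns the feature map into an attribute map (B, A, H, W)
        self.attr_conv = nn.Conv2d(feat_dim, num_attrs, kernel_size=1)

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        attr_map = self.attr_conv(feat_map)                    # (B, A, H, W)
        # assumption: each attribute is read from the spatial location
        # that responds most strongly to it
        attr_scores = attr_map.flatten(2).max(dim=-1).values   # (B, A)
        return attr_scores

def zsl_logits(attr_scores: torch.Tensor, class_attr_matrix: torch.Tensor):
    """Recognize classes (seen or unseen) by comparing predicted attributes
    with each class's semantic attribute vector; class_attr_matrix is (C, A)."""
    return attr_scores @ class_attr_matrix.t()                 # (B, C)

def attr_regression_loss(attr_scores: torch.Tensor, gt_attrs: torch.Tensor):
    """Attribute-regression regularizer: push predicted attributes toward the
    ground-truth semantic vector of the image's seen class."""
    return F.mse_loss(attr_scores, gt_attrs)
```

In this sketch the image encoder's final pooling layer is simply dropped, so the head sees local information at every spatial position; training would combine a classification loss on the logits with the attribute-regression term.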
Related papers
- High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning [54.86882315023791]
We propose an innovative approach called High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning (HDAFL)
HDAFL utilizes multiple convolutional kernels to automatically learn discriminative regions highly correlated with attributes in images.
We also introduce a Transformer-based attribute discrimination encoder to enhance the discriminative capability among attributes.
arXiv Detail & Related papers (2024-04-07T13:17:47Z) - Deep Semantic-Visual Alignment for Zero-Shot Remote Sensing Image Scene Classification [26.340737217001497]
Zero-shot learning (ZSL) allows for identifying novel classes that are not seen during training.
Previous ZSL models mainly depend on manually-labeled attributes or word embeddings extracted from language models to transfer knowledge from seen classes to novel classes.
We propose to collect visually detectable attributes automatically. We predict attributes for each class by measuring the semantic-visual similarity between attributes and images.
arXiv Detail & Related papers (2024-02-03T09:18:49Z) - Attribute Prototype Network for Any-Shot Learning [113.50220968583353]
We argue that an image representation with integrated attribute localization ability would be beneficial for any-shot, i.e. zero-shot and few-shot, image classification tasks.
We propose a novel representation learning framework that jointly learns global and local features using only class-level attributes.
arXiv Detail & Related papers (2022-04-04T02:25:40Z) - TransZero++: Cross Attribute-Guided Transformer for Zero-Shot Learning [119.43299939907685]
Zero-shot learning (ZSL) tackles the novel class recognition problem by transferring semantic knowledge from seen classes to unseen ones.
Existing attention-based models learn only inferior region features in a single image because they rely solely on unidirectional attention.
We propose a cross attribute-guided Transformer network, termed TransZero++, to refine visual features and learn accurate attribute localization for semantic-augmented visual embedding representations.
arXiv Detail & Related papers (2021-12-16T05:49:51Z) - TransZero: Attribute-guided Transformer for Zero-Shot Learning [25.55614833575993]
Zero-shot learning (ZSL) aims to recognize novel classes by transferring semantic knowledge from seen classes to unseen ones.
We propose an attribute-guided Transformer network, TransZero, to refine visual features and learn attribute localization for discriminative visual embedding representations.
arXiv Detail & Related papers (2021-12-03T02:39:59Z) - Discriminative Region-based Multi-Label Zero-Shot Learning [145.0952336375342]
Multi-label zero-shot learning (ZSL) is a more realistic counterpart of standard single-label ZSL.
We propose an alternate approach towards region-based discriminability-preserving ZSL.
arXiv Detail & Related papers (2021-08-20T17:56:47Z) - Goal-Oriented Gaze Estimation for Zero-Shot Learning [62.52340838817908]
We introduce a novel goal-oriented gaze estimation module (GEM) to improve the discriminative attribute localization.
We aim to predict the actual human gaze location to get the visual attention regions for recognizing a novel object guided by attribute description.
This work suggests the promising benefits of collecting human gaze data and developing automatic gaze estimation algorithms for high-level computer vision tasks.
arXiv Detail & Related papers (2021-03-05T02:14:57Z) - Attribute Prototype Network for Zero-Shot Learning [113.50220968583353]
We propose a novel zero-shot representation learning framework that jointly learns discriminative global and local features.
Our model points to the visual evidence of the attributes in an image, confirming the improved attribute localization ability of our image representation.
arXiv Detail & Related papers (2020-08-19T06:46:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.