Vision Transformer-based Feature Extraction for Generalized Zero-Shot Learning
- URL: http://arxiv.org/abs/2302.00875v1
- Date: Thu, 2 Feb 2023 04:52:08 GMT
- Title: Vision Transformer-based Feature Extraction for Generalized Zero-Shot Learning
- Authors: Jiseob Kim, Kyuhong Shim, Junhan Kim, Byonghyo Shim
- Abstract summary: Generalized zero-shot learning (GZSL) is a technique for training a deep learning model to identify unseen classes using image attributes.
In this paper, we put forth a new GZSL approach that exploits the Vision Transformer (ViT) to maximize the attribute-related information contained in the image feature.
- Score: 24.589101099475947
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generalized zero-shot learning (GZSL) is a technique for training a deep learning model to identify unseen classes using image attributes. In this paper, we put forth a new GZSL approach that exploits the Vision Transformer (ViT) to maximize the attribute-related information contained in the image feature. In ViT, the entire image region is processed without degrading the image resolution, and local image information is preserved in the patch features. To fully exploit these benefits of ViT, we use the patch features as well as the CLS feature in extracting the attribute-related image feature. In particular, we propose a novel attention-based module, called the attribute attention module (AAM), to aggregate the attribute-related information in the patch features. In AAM, the correlation between each patch feature and the synthetic image attribute is used as the importance weight for that patch. Extensive experiments on benchmark datasets demonstrate that the proposed technique outperforms state-of-the-art GZSL approaches by a large margin.
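As a rough illustration of the attribute attention module described in the abstract, the following minimal PyTorch sketch weights each ViT patch feature by its correlation with a synthetic attribute embedding and aggregates the result with the CLS feature. This is not the authors' implementation: the class name AttributeAttention, the linear attribute projection, the scaled dot-product scoring, and the concatenation with the CLS token are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeAttention(nn.Module):
    # Hypothetical sketch of an AAM-style module: patch features are weighted
    # by their correlation with a (synthetic) attribute embedding, then aggregated.
    def __init__(self, patch_dim: int, attr_dim: int):
        super().__init__()
        self.attr_proj = nn.Linear(attr_dim, patch_dim)  # map attribute vector into patch-feature space

    def forward(self, patch_feats, cls_feat, attr):
        # patch_feats: (B, N, D) patch tokens, cls_feat: (B, D) CLS token, attr: (B, A) attribute vector
        query = self.attr_proj(attr)                                        # (B, D)
        scores = torch.einsum("bnd,bd->bn", patch_feats, query)             # per-patch correlation
        weights = F.softmax(scores / patch_feats.size(-1) ** 0.5, dim=-1)   # per-patch importance weights
        patch_summary = torch.einsum("bn,bnd->bd", weights, patch_feats)
        # combine the CLS feature with the attribute-weighted patch summary
        return torch.cat([cls_feat, patch_summary], dim=-1)                 # (B, 2D)

# Example usage with ViT-Base-like dimensions (196 patches, 768-dim tokens, 312-dim attribute vector)
aam = AttributeAttention(patch_dim=768, attr_dim=312)
feature = aam(torch.randn(4, 196, 768), torch.randn(4, 768), torch.randn(4, 312))
print(feature.shape)  # torch.Size([4, 1536])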
Related papers
- Deep Semantic-Visual Alignment for Zero-Shot Remote Sensing Image Scene Classification [26.340737217001497]
Zero-shot learning (ZSL) allows for identifying novel classes that are not seen during training.
Previous ZSL models mainly depend on manually-labeled attributes or word embeddings extracted from language models to transfer knowledge from seen classes to novel classes.
We propose to collect visually detectable attributes automatically and predict attributes for each class by modeling the semantic-visual similarity between attributes and images.
arXiv Detail & Related papers (2024-02-03T09:18:49Z)
- Patch-level Representation Learning for Self-supervised Vision Transformers [68.8862419248863]
Vision Transformers (ViTs) have gained much attention recently as a better architectural choice, often outperforming convolutional networks for various visual tasks.
Inspired by this, we design a simple yet effective visual pretext task, coined SelfPatch, for learning better patch-level representations.
We demonstrate that SelfPatch can significantly improve the performance of existing SSL methods for various visual tasks.
arXiv Detail & Related papers (2022-06-16T08:01:19Z)
- Attribute Prototype Network for Any-Shot Learning [113.50220968583353]
We argue that an image representation with integrated attribute localization ability would be beneficial for any-shot, i.e. zero-shot and few-shot, image classification tasks.
We propose a novel representation learning framework that jointly learns global and local features using only class-level attributes.
arXiv Detail & Related papers (2022-04-04T02:25:40Z)
- Semantic Feature Extraction for Generalized Zero-shot Learning [23.53412767106488]
Generalized zero-shot learning (GZSL) is a technique for training a deep learning model to identify unseen classes using attributes.
In this paper, we put forth a new GZSL technique that greatly improves GZSL classification performance.
arXiv Detail & Related papers (2021-12-29T09:52:30Z)
- TransZero++: Cross Attribute-Guided Transformer for Zero-Shot Learning [119.43299939907685]
Zero-shot learning (ZSL) tackles the novel class recognition problem by transferring semantic knowledge from seen classes to unseen ones.
Existing attention-based models tend to learn inferior region features from a single image because they rely solely on unidirectional attention.
We propose a cross attribute-guided Transformer network, termed TransZero++, to refine visual features and learn accurate attribute localization for semantic-augmented visual embedding representations (a generic sketch of attribute-guided cross attention follows this list).
arXiv Detail & Related papers (2021-12-16T05:49:51Z)
- Goal-Oriented Gaze Estimation for Zero-Shot Learning [62.52340838817908]
We introduce a novel goal-oriented gaze estimation module (GEM) to improve the discriminative attribute localization.
We aim to predict the actual human gaze location to obtain the visual attention regions for recognizing a novel object guided by its attribute description.
This work suggests the promising benefits of collecting human gaze datasets and of automatic gaze estimation algorithms for high-level computer vision tasks.
arXiv Detail & Related papers (2021-03-05T02:14:57Z)
- Semantic Disentangling Generalized Zero-Shot Learning [50.259058462272435]
Generalized Zero-Shot Learning (GZSL) aims to recognize images from both seen and unseen categories.
In this paper, we propose a novel feature disentangling approach based on an encoder-decoder architecture.
The proposed model aims to distill high-quality, semantically consistent representations that capture the intrinsic features of seen images.
arXiv Detail & Related papers (2021-01-20T05:46:21Z)
- Attribute Prototype Network for Zero-Shot Learning [113.50220968583353]
We propose a novel zero-shot representation learning framework that jointly learns discriminative global and local features.
Our model points to the visual evidence of the attributes in an image, confirming the improved attribute localization ability of our image representation.
arXiv Detail & Related papers (2020-08-19T06:46:35Z)
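The cross attribute-guided attention mentioned in the TransZero++ entry can be pictured, very roughly, as attribute embeddings attending over ViT patch features so that each attribute picks out the image regions most relevant to it. The sketch below is a generic, hypothetical rendering of that idea, not TransZero++'s actual architecture; the module name, the single attention direction shown, and all dimensions are assumptions.

import torch
import torch.nn as nn

class AttributeGuidedAttention(nn.Module):
    # Hypothetical sketch: attribute embeddings act as queries over visual patch
    # features, yielding one attribute-localized visual feature per attribute.
    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, attr_embed, patch_feats):
        # attr_embed: (B, K, D) one embedding per attribute; patch_feats: (B, N, D) ViT patch tokens
        attended, attn_weights = self.cross_attn(query=attr_embed, key=patch_feats, value=patch_feats)
        return attended, attn_weights  # (B, K, D) features, (B, K, N) attention maps

# Example usage: 85 attributes (an AwA2-style attribute count), 196 patch tokens
module = AttributeGuidedAttention()
out, maps = module(torch.randn(2, 85, 768), torch.randn(2, 196, 768))
print(out.shape, maps.shape)  # torch.Size([2, 85, 768]) torch.Size([2, 85, 196])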