Prior Knowledge about Attributes: Learning a More Effective Potential
Space for Zero-Shot Recognition
- URL: http://arxiv.org/abs/2009.06226v1
- Date: Mon, 14 Sep 2020 06:57:23 GMT
- Title: Prior Knowledge about Attributes: Learning a More Effective Potential
Space for Zero-Shot Recognition
- Authors: Chunlai Chai, Yukuan Lou, Shijin Zhang
- Abstract summary: We build an Attribute Correlation Potential Space Generation model which uses a graph convolution network and attribute correlation to generate a more discriminating potential space.
Our approach outperforms some existing state-of-the-art methods on several benchmark datasets.
- Score: 0.07161783472741746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot learning (ZSL) aims to recognize unseen classes accurately by
learning from seen classes and known attributes, but previous studies ignored the
correlations among attributes, which leads to confused classification results. To
solve this problem, we build an Attribute Correlation Potential Space
Generation (ACPSG) model which uses a graph convolution network and attribute
correlation to generate a more discriminating potential space. Combining
potential discrimination space and user-defined attribute space, we can better
classify unseen classes. Our approach outperforms some existing
state-of-the-art methods on several benchmark datasets, in both the
conventional ZSL and the generalized ZSL settings.
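The following is a minimal sketch of the idea described in the abstract, not the authors' released implementation: a small graph convolution over an attribute correlation graph produces a latent "potential" class space, which is concatenated with the user-defined attribute space for nearest-prototype classification of image features. The helper names (normalized_adjacency, AttributeGCN), the cosine-similarity construction of the correlation graph, and the toy dimensions are all assumptions made for illustration.

```python
# Hedged sketch of an ACPSG-style pipeline (assumed details, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_adjacency(attrs: torch.Tensor) -> torch.Tensor:
    """Build a symmetrically normalized adjacency from attribute correlations
    (one assumed way to encode attribute correlation as a class graph)."""
    sim = F.normalize(attrs, dim=1) @ F.normalize(attrs, dim=1).T  # cosine similarity
    adj = sim + torch.eye(sim.size(0))                             # add self-loops
    deg_inv_sqrt = adj.sum(dim=1).clamp(min=1e-6).pow(-0.5)
    return deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]


class AttributeGCN(nn.Module):
    """Two-layer GCN mapping user-defined class attributes to a latent space."""

    def __init__(self, attr_dim: int, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.w1 = nn.Linear(attr_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, latent_dim)

    def forward(self, attrs: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = F.relu(self.w1(adj @ attrs))   # propagate over the correlation graph
        return self.w2(adj @ h)


# Toy usage: 10 classes, 85-dim attributes, 512-dim image features (made-up sizes).
num_classes, attr_dim, feat_dim = 10, 85, 512
class_attrs = torch.rand(num_classes, attr_dim)        # user-defined attribute space
adj = normalized_adjacency(class_attrs)

gcn = AttributeGCN(attr_dim, hidden_dim=256, latent_dim=feat_dim - attr_dim)
latent = gcn(class_attrs, adj)                         # learned "potential" space
class_embed = torch.cat([latent, class_attrs], dim=1)  # combine both spaces

image_feat = torch.rand(4, feat_dim)                   # e.g. CNN features of 4 images
scores = F.normalize(image_feat, dim=1) @ F.normalize(class_embed, dim=1).T
pred = scores.argmax(dim=1)                            # nearest class prototype
```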
Related papers
- CREST: Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning [48.46511584490582]
Zero-shot learning (ZSL) enables the recognition of novel classes by leveraging semantic knowledge transfer from known to unknown categories.
Real-world challenges such as distribution imbalances and attribute co-occurrence hinder the discernment of local variances in images.
We propose a bidirectional cross-modal ZSL approach CREST to overcome these challenges.
arXiv Detail & Related papers (2024-04-15T10:19:39Z)
- High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning [54.86882315023791]
We propose an innovative approach called High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning (HDAFL).
HDAFL utilizes multiple convolutional kernels to automatically learn discriminative regions highly correlated with attributes in images.
We also introduce a Transformer-based attribute discrimination encoder to enhance the discriminative capability among attributes.
arXiv Detail & Related papers (2024-04-07T13:17:47Z)
- Hierarchical Visual Primitive Experts for Compositional Zero-Shot Learning [52.506434446439776]
Compositional zero-shot learning (CZSL) aims to recognize compositions with prior knowledge of known primitives (attribute and object).
We propose a simple and scalable framework called Composition Transformer (CoT) to address these issues.
Our method achieves SoTA performance on several benchmarks, including MIT-States, C-GQA, and VAW-CZSL.
arXiv Detail & Related papers (2023-08-08T03:24:21Z)
- On Implicit Attribute Localization for Generalized Zero-Shot Learning [43.61533666141709]
We show that common ZSL backbones can implicitly localize attributes, yet this property is not exploited.
We then propose SELAR, a simple method that further encourages attribute localization, surprisingly achieving very competitive generalized ZSL (GZSL) performance.
arXiv Detail & Related papers (2021-03-08T12:31:37Z)
- Goal-Oriented Gaze Estimation for Zero-Shot Learning [62.52340838817908]
We introduce a novel goal-oriented gaze estimation module (GEM) to improve the discriminative attribute localization.
We aim to predict the actual human gaze location to get the visual attention regions for recognizing a novel object guided by attribute description.
This work implies the promising benefits of collecting human gaze datasets and developing automatic gaze estimation algorithms for high-level computer vision tasks.
arXiv Detail & Related papers (2021-03-05T02:14:57Z)
- OntoZSL: Ontology-enhanced Zero-shot Learning [19.87808305218359]
Key to implementing Zero-shot Learning (ZSL) is leveraging prior knowledge of classes to build semantic relationships between them.
In this paper, we explore richer and more competitive prior knowledge to model the inter-class relationship for ZSL.
To address the data imbalance between seen and unseen classes, we develop a generative ZSL framework with Generative Adversarial Networks (GANs).
arXiv Detail & Related papers (2021-02-15T04:39:58Z)
- Isometric Propagation Network for Generalized Zero-shot Learning [72.02404519815663]
A popular strategy is to learn a mapping between the semantic space of class attributes and the visual space of images based on the seen classes and their data.
We propose Isometric propagation Network (IPN), which learns to strengthen the relation between classes within each space and align the class dependency in the two spaces.
IPN achieves state-of-the-art performance on three popular Zero-shot learning benchmarks.
arXiv Detail & Related papers (2021-02-03T12:45:38Z)
- Information Bottleneck Constrained Latent Bidirectional Embedding for Zero-Shot Learning [59.58381904522967]
We propose a novel embedding based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
arXiv Detail & Related papers (2020-09-16T03:54:12Z)