Hierarchical Feature Embedding for Attribute Recognition
- URL: http://arxiv.org/abs/2005.11576v1
- Date: Sat, 23 May 2020 17:52:41 GMT
- Title: Hierarchical Feature Embedding for Attribute Recognition
- Authors: Jie Yang, Jiarou Fan, Yiru Wang, Yige Wang, Weihao Gan, Lin Liu, Wei
Wu
- Abstract summary: We propose a hierarchical feature embedding framework, which learns a fine-grained feature embedding by combining attribute and ID information.
Experiments show that our method achieves the state-of-the-art results on two pedestrian attribute datasets and a facial attribute dataset.
- Score: 26.79901907956084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Attribute recognition is a crucial but challenging task due to viewpoint
changes, illumination variations, appearance diversity, etc. Most previous
work considers only attribute-level feature embedding, which may
perform poorly in complicated heterogeneous conditions. To address this
problem, we propose a hierarchical feature embedding (HFE) framework, which
learns a fine-grained feature embedding by combining attribute and ID
information. In HFE, we maintain the inter-class and intra-class feature
embedding simultaneously. Not only samples with the same attribute but also
samples with the same ID are gathered more closely, which could restrict the
feature embedding of visually hard samples with regard to attributes and
improve the robustness to varying conditions. We establish this hierarchical
structure by utilizing an HFE loss consisting of attribute-level and ID-level
constraints. We also introduce an absolute boundary regularization and a
dynamic loss weight as supplementary components to help build up the feature
embedding. Experiments show that our method achieves the state-of-the-art
results on two pedestrian attribute datasets and a facial attribute dataset.
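The abstract names the HFE loss components (attribute-level and ID-level constraints, an absolute boundary regularization, a dynamic loss weight) but not their exact form. The following is a minimal NumPy sketch of hierarchical triplet-style margin constraints in that spirit; all function names, margin values, and the fixed weight `w_id` are illustrative assumptions, not the authors' formulation (in particular, the paper's dynamic loss weight is replaced here by a fixed placeholder).

```python
import numpy as np

def pairwise_dist(X):
    """Euclidean distance between all rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.sqrt(np.maximum(d2, 0.0))

def hfe_loss_sketch(X, attr, ids, m_attr=1.0, m_id=0.5, boundary=0.3, w_id=1.0):
    """Hierarchical margin loss (illustrative, not the paper's exact form):
    - attribute level: same-attribute samples should lie closer than
      different-attribute ones (triplet hinge with margin m_attr)
    - ID level: same-ID samples should lie closer than same-attribute /
      different-ID ones (triplet hinge with margin m_id)
    - absolute boundary: same-ID distances should not exceed `boundary`
    The paper uses a *dynamic* loss weight; w_id is a fixed placeholder."""
    n, D = len(X), pairwise_dist(X)
    same_attr = attr[:, None] == attr[None, :]
    same_id = ids[:, None] == ids[None, :]
    not_self = ~np.eye(n, dtype=bool)
    l_attr = l_id = l_abs = 0.0
    for i in range(n):
        pos_a, neg_a = same_attr[i] & not_self[i], ~same_attr[i]
        if pos_a.any() and neg_a.any():   # attribute-level triplet hinge
            l_attr += max(0.0, D[i][pos_a].max() - D[i][neg_a].min() + m_attr)
        pos_i, neg_i = same_id[i] & not_self[i], same_attr[i] & ~same_id[i]
        if pos_i.any() and neg_i.any():   # ID-level triplet hinge
            l_id += max(0.0, D[i][pos_i].max() - D[i][neg_i].min() + m_id)
        if pos_i.any():                   # absolute boundary regularization
            l_abs += max(0.0, D[i][pos_i].max() - boundary)
    return l_attr + w_id * l_id + l_abs
```

A well-separated hierarchical embedding, with tight same-ID clusters nested inside wider same-attribute clusters, drives all three hinge terms to zero, which is the geometric structure the abstract describes.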
Related papers
- Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification [78.52704557647438]
We propose a novel Fine-grained Representation and Recomposition (FIRe$^2$) framework to tackle both limitations without any auxiliary annotation or data.
Experiments demonstrate that FIRe$2$ can achieve state-of-the-art performance on five widely-used cloth-changing person Re-ID benchmarks.
arXiv Detail & Related papers (2023-08-21T12:59:48Z) - A Solution to Co-occurrence Bias: Attributes Disentanglement via Mutual
Information Minimization for Pedestrian Attribute Recognition [10.821982414387525]
We show that current methods can struggle to generalize such fitted attribute interdependencies to scenes or identities outside the dataset distribution.
To render models robust in realistic scenes, we propose attributes-disentangled feature learning to ensure that recognizing one attribute does not depend on the presence of others.
arXiv Detail & Related papers (2023-07-28T01:34:55Z) - Learning Conditional Attributes for Compositional Zero-Shot Learning [78.24309446833398]
Compositional Zero-Shot Learning (CZSL) aims to train models to recognize novel compositional concepts.
One of the challenges is to model attributes interacted with different objects, e.g., the attribute "wet" in "wet apple" and "wet cat" is different.
We argue that attributes are conditioned on the recognized object and input image and explore learning conditional attribute embeddings.
arXiv Detail & Related papers (2023-05-29T08:04:05Z) - ASD: Towards Attribute Spatial Decomposition for Prior-Free Facial
Attribute Recognition [11.757112726108822]
Representing the spatial properties of facial attributes is a vital challenge for facial attribute recognition (FAR).
Recent advances have achieved reliable performance for FAR, benefiting from descriptions of spatial properties via extra prior information.
We propose a prior-free method for attribute spatial decomposition (ASD), mitigating the spatial ambiguity of facial attributes without any extra prior information.
arXiv Detail & Related papers (2022-10-25T02:25:05Z) - TransFA: Transformer-based Representation for Face Attribute Evaluation [87.09529826340304]
We propose a novel transformer-based representation for attribute evaluation method (TransFA).
The proposed TransFA achieves superior performances compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-07-12T10:58:06Z) - Disentangled Face Attribute Editing via Instance-Aware Latent Space
Search [30.17338705964925]
A rich set of semantic directions exists in the latent space of Generative Adversarial Networks (GANs).
Existing methods may suffer from poor attribute-variation disentanglement, leading to unwanted changes in other attributes when altering the desired one.
We propose a novel framework (IALS) that performs Instance-Aware Latent-Space Search to find semantic directions for disentangled attribute editing.
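As a hedged illustration of the general idea of disentangled latent-space editing (not IALS's instance-aware direction search itself), one common recipe is to project the target semantic direction orthogonal to the directions of other attributes before moving the latent code; the function below is an assumed sketch.

```python
import numpy as np

def disentangled_edit(z, d_target, other_dirs, alpha=3.0):
    """Shift latent code z along d_target after removing its components
    along other attribute directions (Gram-Schmidt projection), so the
    edit changes the target attribute with less effect on the others.
    Generic sketch: IALS searches instance-aware directions rather than
    using fixed global ones as assumed here."""
    d = np.asarray(d_target, dtype=float).copy()
    for u in other_dirs:
        u = u / np.linalg.norm(u)
        d = d - (d @ u) * u      # project out the entangled component
    d = d / np.linalg.norm(d)
    return z + alpha * d
```

After the projection, the edit direction has no component along the other attribute directions, so moving along it leaves those attribute scores (to first order) unchanged.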
arXiv Detail & Related papers (2021-05-26T16:19:08Z) - AttributeNet: Attribute Enhanced Vehicle Re-Identification [70.89289512099242]
We introduce AttributeNet (ANet) that jointly extracts identity-relevant features and attribute features.
We enable the interaction by distilling the ReID-helpful attribute feature and adding it into the general ReID feature to increase the discrimination power.
We validate the effectiveness of our framework on three challenging datasets.
arXiv Detail & Related papers (2021-02-07T19:51:02Z) - Learning to Infer Unseen Attribute-Object Compositions [55.58107964602103]
A graph-based model is proposed that can flexibly recognize both single- and multi-attribute-object compositions.
We build a large-scale Multi-Attribute dataset with 116,099 images and 8,030 composition categories.
arXiv Detail & Related papers (2020-10-27T14:57:35Z) - Prominent Attribute Modification using Attribute Dependent Generative
Adversarial Network [4.654937118111992]
The proposed approach is based on two generators and two discriminators that utilize the binary as well as the real representation of the attributes.
Experiments on the CelebA dataset show that our method effectively performs multiple-attribute editing while preserving other facial details intact.
arXiv Detail & Related papers (2020-04-24T13:38:05Z) - Attribute Mix: Semantic Data Augmentation for Fine Grained Recognition [102.45926816660665]
We propose Attribute Mix, a data augmentation strategy at attribute level to expand the fine-grained samples.
The principle lies in that attribute features are shared among fine-grained sub-categories, and can be seamlessly transferred among images.
arXiv Detail & Related papers (2020-04-06T14:06:47Z)
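Attribute Mix transfers attribute-bearing regions between fine-grained sub-category images. As a hedged illustration, a CutMix-style region paste with area-proportional label mixing can be sketched as follows; the explicit `box` argument is a simplifying assumption, since the actual method localizes attribute regions automatically rather than taking a user-supplied box.

```python
import numpy as np

def attribute_mix(img_a, img_b, label_a, label_b, box):
    """Paste a (here, user-supplied) attribute region from img_b into
    img_a and mix the soft labels in proportion to the pasted area.
    `box` = (y0, y1, x0, x1); images are same-shaped 2-D arrays."""
    y0, y1, x0, x1 = box
    mixed = img_a.copy()
    mixed[y0:y1, x0:x1] = img_b[y0:y1, x0:x1]
    # label weight = fraction of the image left untouched
    lam = 1.0 - ((y1 - y0) * (x1 - x0)) / float(img_a.shape[0] * img_a.shape[1])
    mixed_label = lam * np.asarray(label_a, float) + (1.0 - lam) * np.asarray(label_b, float)
    return mixed, mixed_label
```

Because attribute features are shared among sub-categories, the pasted region carries a valid attribute cue, and the soft label reflects how much of each source image survives in the mix.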
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.