Fine-Grained Fashion Similarity Learning by Attribute-Specific Embedding
Network
- URL: http://arxiv.org/abs/2002.02814v1
- Date: Fri, 7 Feb 2020 14:42:26 GMT
- Title: Fine-Grained Fashion Similarity Learning by Attribute-Specific Embedding
Network
- Authors: Zhe Ma, Jianfeng Dong, Yao Zhang, Zhongzi Long, Yuan He, Hui Xue,
Shouling Ji
- Abstract summary: We propose an Attribute-Specific Embedding Network (ASEN) to jointly learn multiple attribute-specific embeddings in an end-to-end manner.
ASEN is able to locate the related regions and capture the essential patterns under the guidance of the specified attribute.
Experiments on four fashion-related datasets show the effectiveness of ASEN for fine-grained fashion similarity learning.
- Score: 59.479783847922135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper strives to learn fine-grained fashion similarity. In this
similarity paradigm, one should pay more attention to the similarity in terms
of a specific design/attribute among fashion items, which has potential value
in many fashion-related applications such as fashion copyright protection. To
this end, we propose an Attribute-Specific Embedding Network (ASEN) to jointly
learn multiple attribute-specific embeddings in an end-to-end manner and thus
measure the fine-grained similarity in the corresponding space. With two
attention modules, i.e., Attribute-aware Spatial Attention and Attribute-aware
Channel Attention, ASEN is able to locate the related regions and capture the
essential patterns under the guidance of the specified attribute, thus making
the learned attribute-specific embeddings better reflect the fine-grained
similarity. Extensive experiments on four fashion-related datasets show the
effectiveness of ASEN for fine-grained fashion similarity learning and its
potential for fashion reranking.
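To make the described architecture concrete, below is a minimal PyTorch-style sketch of the idea in the abstract: an attribute embedding guides spatial attention (to locate attribute-related regions) and channel attention (to select attribute-relevant patterns), yielding one embedding per (image, attribute) pair. All module names, layer choices, and dimensions here are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of attribute-specific embedding with attribute-aware attention.
# Everything below (module names, dimensions, softmax/sigmoid choices) is an
# assumption for illustration, not the paper's exact ASEN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttributeAwareSpatialAttention(nn.Module):
    """Weights spatial locations of a feature map conditioned on an attribute."""

    def __init__(self, feat_dim: int, attr_dim: int):
        super().__init__()
        self.query = nn.Linear(attr_dim, feat_dim)  # project attribute into feature space

    def forward(self, feats: torch.Tensor, attr: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W), attr: (B, attr_dim)
        q = self.query(attr)                                   # (B, C)
        logits = torch.einsum("bchw,bc->bhw", feats, q)        # attribute-feature affinity per location
        weights = F.softmax(logits.flatten(1), dim=1).view_as(logits)
        return (feats * weights.unsqueeze(1)).sum(dim=(2, 3))  # (B, C) pooled descriptor


class AttributeAwareChannelAttention(nn.Module):
    """Re-weights channels of the pooled descriptor conditioned on an attribute."""

    def __init__(self, feat_dim: int, attr_dim: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(feat_dim + attr_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim),
            nn.Sigmoid(),
        )

    def forward(self, pooled: torch.Tensor, attr: torch.Tensor) -> torch.Tensor:
        gates = self.gate(torch.cat([pooled, attr], dim=1))    # per-channel gates in (0, 1)
        return pooled * gates


class ASENSketch(nn.Module):
    """Maps (backbone feature map, attribute id) to an attribute-specific embedding."""

    def __init__(self, feat_dim=512, attr_dim=128, embed_dim=256, num_attrs=8):
        super().__init__()
        self.attr_embed = nn.Embedding(num_attrs, attr_dim)
        self.spatial = AttributeAwareSpatialAttention(feat_dim, attr_dim)
        self.channel = AttributeAwareChannelAttention(feat_dim, attr_dim)
        self.project = nn.Linear(feat_dim, embed_dim)

    def forward(self, feats: torch.Tensor, attr_ids: torch.Tensor) -> torch.Tensor:
        attr = self.attr_embed(attr_ids)            # (B, attr_dim)
        pooled = self.spatial(feats, attr)          # locate attribute-related regions
        refined = self.channel(pooled, attr)        # keep attribute-relevant channels
        return F.normalize(self.project(refined), dim=1)  # unit-norm embedding


if __name__ == "__main__":
    model = ASENSketch()
    feats = torch.randn(4, 512, 7, 7)               # e.g. CNN backbone feature maps
    emb = model(feats, torch.tensor([0, 1, 2, 3]))  # one attribute per image
    print(emb.shape)                                 # torch.Size([4, 256])
```

With unit-normalized embeddings, fine-grained similarity for a given attribute reduces to cosine similarity in that attribute's space, which is also how a coarse retrieval list could be reranked.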
Related papers
- Attribute-Aware Implicit Modality Alignment for Text Attribute Person Search [19.610244285078483]
We propose an Attribute-Aware Implicit Modality Alignment (AIMA) framework to learn the correspondence of local representations between textual attributes and images.
We show that our proposed method significantly surpasses the current state-of-the-art methods.
arXiv Detail & Related papers (2024-06-06T03:34:42Z)
- Dual Relation Mining Network for Zero-Shot Learning [48.89161627050706]
We propose a Dual Relation Mining Network (DRMN) to enable effective visual-semantic interactions and learn semantic relationships among attributes for knowledge transfer.
Specifically, we introduce a Dual Attention Block (DAB) for visual-semantic relationship mining, which enriches visual information by multi-level feature fusion.
For semantic relationship modeling, we utilize a Semantic Interaction Transformer (SIT) to enhance the generalization of attribute representations among images.
arXiv Detail & Related papers (2024-05-06T16:31:19Z)
- Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification [78.52704557647438]
We propose a novel FIne-grained Representation and Recomposition (FIRe$^2$) framework to tackle both limitations without any auxiliary annotation or data.
Experiments demonstrate that FIRe$^2$ can achieve state-of-the-art performance on five widely-used cloth-changing person Re-ID benchmarks.
arXiv Detail & Related papers (2023-08-21T12:59:48Z)
- FashionSearchNet-v2: Learning Attribute Representations with Localization for Image Retrieval with Attribute Manipulation [22.691709684780292]
The proposed FashionSearchNet-v2 architecture is able to learn attribute-specific representations by leveraging its weakly-supervised localization module.
The network is jointly trained with a combination of attribute classification and triplet ranking losses to estimate local representations (a generic sketch of such a combined objective is given after this list).
Experiments performed on several datasets that are rich in terms of the number of attributes show that FashionSearchNet-v2 outperforms the other state-of-the-art attribute manipulation techniques.
arXiv Detail & Related papers (2021-11-28T13:50:20Z)
- Attribute-aware Explainable Complementary Clothing Recommendation [37.30129304097086]
This work aims to tackle the explainability challenge in fashion recommendation tasks by proposing a novel Attribute-aware Fashion Recommender (AFRec).
AFRec assesses outfit compatibility by explicitly leveraging the attribute-level representations extracted from each item's visual features.
The attributes serve as the bridge between two fashion items, where we quantify the affinity of a pair of items through the learned compatibility between their attributes.
arXiv Detail & Related papers (2021-07-04T14:56:07Z)
- AdaTag: Multi-Attribute Value Extraction from Product Profiles with Adaptive Decoding [55.89773725577615]
We present AdaTag, which uses adaptive decoding to handle attribute extraction.
Our experiments on a real-world e-Commerce dataset show marked improvements over previous methods.
arXiv Detail & Related papers (2021-06-04T07:54:11Z)
- Effectively Leveraging Attributes for Visual Similarity [52.2646549020835]
We propose the Pairwise Attribute-informed similarity Network (PAN), which breaks similarity learning into capturing similarity conditions and relevance scores from a joint representation of two images.
PAN obtains a 4-9% improvement on compatibility prediction between clothing items on Polyvore Outfits, a 5% gain on few-shot classification of images using Caltech-UCSD Birds (CUB), and an over 1% boost to Recall@1 on In-Shop Clothes Retrieval.
arXiv Detail & Related papers (2021-05-04T18:28:35Z)
- Fine-Grained Fashion Similarity Prediction by Attribute-Specific Embedding Learning [71.74073012364326]
We propose an Attribute-Specific Embedding Network (ASEN) to jointly learn multiple attribute-specific embeddings.
The proposed ASEN is comprised of a global branch and a local branch.
Experiments on three fashion-related datasets, i.e., FashionAI, DARN, and DeepFashion, show the effectiveness of ASEN for fine-grained fashion similarity prediction.
arXiv Detail & Related papers (2021-04-06T11:26:38Z)
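As referenced in the FashionSearchNet-v2 entry above, attribute-specific embeddings of this kind are commonly trained with an attribute classification loss combined with a triplet ranking loss. The snippet below is a minimal sketch of such a combined objective, assuming standard PyTorch losses; the margin and weighting factor are illustrative assumptions.

```python
# Sketch of a combined attribute-classification + triplet-ranking objective.
# The margin and cls_weight values are illustrative assumptions.
import torch
import torch.nn.functional as F


def combined_loss(anchor, positive, negative, attr_logits, attr_labels,
                  margin: float = 0.2, cls_weight: float = 1.0):
    """Triplet ranking on embeddings plus cross-entropy on attribute predictions."""
    triplet = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    classification = F.cross_entropy(attr_logits, attr_labels)
    return triplet + cls_weight * classification


if __name__ == "__main__":
    b, d, n_cls = 8, 256, 10
    anchor, positive, negative = (torch.randn(b, d) for _ in range(3))
    logits = torch.randn(b, n_cls)
    labels = torch.randint(0, n_cls, (b,))
    print(combined_loss(anchor, positive, negative, logits, labels).item())
```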