Dress Well via Fashion Cognitive Learning
- URL: http://arxiv.org/abs/2208.00639v1
- Date: Mon, 1 Aug 2022 06:52:37 GMT
- Title: Dress Well via Fashion Cognitive Learning
- Authors: Kaicheng Pang, Xingxing Zou, Waikeung Wong
- Abstract summary: We propose a Fashion Cognitive Network (FCN) to learn the relationships between the visual-semantic embeddings of outfit compositions and the appearance features of individuals. FCN contains two submodules, namely an outfit encoder and a Multi-label Graph Neural Network (ML-GCN).
- Score: 18.867513936553195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fashion compatibility models enable online retailers to easily obtain a large number of outfit compositions of good quality. However, effective fashion recommendation demands precise service for each customer with a deeper cognition of fashion. In this paper, we conduct the first study on fashion cognitive learning, i.e., fashion recommendation conditioned on personal physical information. To this end, we propose a Fashion Cognitive Network (FCN) to learn the relationships between the visual-semantic embeddings of outfit compositions and the appearance features of individuals. FCN contains two submodules, namely an outfit encoder and a Multi-label Graph Neural Network (ML-GCN). The outfit encoder uses a convolutional layer to encode an outfit into an outfit embedding, while the ML-GCN learns label classifiers via stacked GCN layers. We conducted extensive experiments on the newly collected O4U dataset, and the results provide strong qualitative and quantitative evidence that our framework outperforms alternative methods.
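The abstract gives only the coarse shape of the architecture. Below is a minimal PyTorch sketch of how the two submodules could fit together; every name and dimension (embed_dim, the person-appearance feature, the label-graph adjacency, and how the conditioning enters) is an assumption for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutfitEncoder(nn.Module):
    """Encodes a variable-length outfit (a sequence of item embeddings)
    into a single outfit embedding with a convolutional layer."""
    def __init__(self, embed_dim=512):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=3, padding=1)

    def forward(self, items):                  # items: (batch, n_items, embed_dim)
        h = self.conv(items.transpose(1, 2))   # convolve across the item axis
        return h.max(dim=2).values             # (batch, embed_dim)

class MLGCN(nn.Module):
    """Stacked GCN over a label graph; each output row acts as a
    classifier for one label (in the spirit of ML-GCN)."""
    def __init__(self, num_labels, label_dim, embed_dim, adj):
        super().__init__()
        self.register_buffer("adj", adj)       # (num_labels, num_labels), normalized
        self.w1 = nn.Linear(label_dim, embed_dim)
        self.w2 = nn.Linear(embed_dim, embed_dim)

    def forward(self, label_embeds):           # (num_labels, label_dim)
        h = F.relu(self.adj @ self.w1(label_embeds))
        return self.adj @ self.w2(h)           # (num_labels, embed_dim)

class FCN(nn.Module):
    """Scores an outfit against per-label classifiers, conditioned on a
    person-appearance feature (how the conditioning enters is a guess)."""
    def __init__(self, num_labels, label_dim=300, embed_dim=512, adj=None):
        super().__init__()
        adj = adj if adj is not None else torch.eye(num_labels)
        self.encoder = OutfitEncoder(embed_dim)
        self.gcn = MLGCN(num_labels, label_dim, embed_dim, adj)
        self.fuse = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, items, person_feat, label_embeds):
        outfit = self.encoder(items)                            # (B, D)
        fused = self.fuse(torch.cat([outfit, person_feat], -1)) # condition on person
        classifiers = self.gcn(label_embeds)                    # (L, D)
        return fused @ classifiers.t()                          # (B, L) label logits
```

Under these assumptions, training would minimize a standard multi-label loss (e.g., binary cross-entropy) over the (batch, num_labels) logits.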
Related papers
- COutfitGAN: Learning to Synthesize Compatible Outfits Supervised by Silhouette Masks and Fashion Styles [23.301719420997927]
We propose the new task of generating complementary and compatible fashion items based on an arbitrary number of given fashion items: given fashion items that can make up an outfit, the aim is to synthesize photo-realistic images of other, complementary fashion items that are compatible with the given ones.
To achieve this, we propose an outfit generation framework, referred to as COutfitGAN, which includes a pyramid style extractor, an outfit generator, a UNet-based real/fake discriminator, and a collocation discriminator.
arXiv Detail & Related papers (2025-02-12T03:32:28Z)
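The COutfitGAN entry names four components. The sketch below only shows how they might be wired together; every layer choice is a placeholder (the paper's real/fake discriminator, for instance, is UNet-based, not the toy convolution used here).

```python
import torch
import torch.nn as nn

class PyramidStyleExtractor(nn.Module):
    """Pools style features from the given items at several scales."""
    def __init__(self, dim=64):
        super().__init__()
        self.scales = nn.ModuleList(
            nn.Conv2d(3, dim, 3, stride=2 ** s, padding=1) for s in range(3))

    def forward(self, given):                    # given: (B, 3, H, W)
        return torch.cat([c(given).mean(dim=(2, 3)) for c in self.scales], dim=1)

class OutfitGenerator(nn.Module):
    """Synthesizes a complementary item from a style code and a silhouette mask."""
    def __init__(self, style_dim=192, hw=64):
        super().__init__()
        self.hw = hw
        self.fc = nn.Linear(style_dim, 3 * hw * hw)

    def forward(self, style, mask):              # mask: (B, 1, hw, hw)
        img = torch.tanh(self.fc(style)).view(-1, 3, self.hw, self.hw)
        return img * mask                        # silhouette constrains the shape

class Discriminator(nn.Module):
    """Stands in for both the real/fake and the collocation discriminators."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                                 nn.Conv2d(32, 1, 4, 2, 1))

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))   # one score per image

style = PyramidStyleExtractor()
gen = OutfitGenerator()
given = torch.randn(2, 3, 64, 64)
mask = torch.ones(2, 1, 64, 64)
fake = gen(style(given), mask)                   # (2, 3, 64, 64) synthesized item
```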
- Learning to Synthesize Compatible Fashion Items Using Semantic Alignment and Collocation Classification: An Outfit Generation Framework [59.09707044733695]
We propose a novel outfit generation framework, OutfitGAN, with the aim of synthesizing an entire outfit.
OutfitGAN includes a semantic alignment module, which is responsible for characterizing the mapping correspondence between the existing fashion items and the synthesized ones.
In order to evaluate the performance of our proposed models, we built a large-scale dataset consisting of 20,000 fashion outfits.
arXiv Detail & Related papers (2025-02-05T12:13:53Z)
- Towards Intelligent Design: A Self-driven Framework for Collocated Clothing Synthesis Leveraging Fashion Styles and Textures [17.35328594773488]
Collocated clothing synthesis (CCS) has emerged as a pivotal topic in fashion technology.
Previous investigations have relied on paired outfits, such as a matching pair of upper and lower clothing, to train a generative model for this task.
We introduce a new self-driven framework, named style- and texture-guided generative network (ST-Net), to synthesize collocated clothing without the necessity for paired outfits.
arXiv Detail & Related papers (2025-01-23T05:46:08Z)
- FaD-VLP: Fashion Vision-and-Language Pre-training towards Unified Retrieval and Captioning [66.38951790650887]
Multimodal tasks in the fashion domain have significant potential for e-commerce.
We propose a novel fashion-specific pre-training framework based on weakly-supervised triplets constructed from fashion image-text pairs.
We show that the triplet-based tasks are an effective addition to standard multimodal pre-training tasks.
arXiv Detail & Related papers (2022-10-26T21:01:19Z)
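As a rough illustration of the triplet idea in the FaD-VLP entry above: a margin loss over image-text embeddings, where the encoders, negative-sampling scheme, and margin value are all placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor_img, pos_txt, neg_txt, margin=0.2):
    """anchor_img, pos_txt, neg_txt: (B, D) L2-normalized embeddings.
    Pulls the paired caption closer than a mismatched one by a margin."""
    pos = F.cosine_similarity(anchor_img, pos_txt)
    neg = F.cosine_similarity(anchor_img, neg_txt)
    return F.relu(margin - pos + neg).mean()

# usage with random stand-in embeddings
img = F.normalize(torch.randn(8, 256), dim=-1)
txt = F.normalize(torch.randn(8, 256), dim=-1)
neg = txt.roll(shifts=1, dims=0)          # mismatched captions as negatives
loss = triplet_loss(img, txt, neg)
```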
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
We present Contrastive Arbitrary Style Transfer (CAST), a new style representation learning and style transfer method based on contrastive learning.
Our framework consists of three key components: a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of the style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
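The CAST entry rests on contrastive style representation learning. Assuming a standard InfoNCE setup in which two views of the same style image are positives, the snippet below sketches that loss over style codes; the multi-layer style projector and domain enhancement module are omitted.

```python
import torch
import torch.nn.functional as F

def style_contrastive_loss(z1, z2, temperature=0.1):
    """z1, z2: (B, D) style codes from two views of the same style images.
    Matching rows are positives; all other pairs in the batch are negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)
```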
- Fashionformer: A simple, Effective and Unified Baseline for Human Fashion Segmentation and Recognition [80.74495836502919]
In this work, we focus on joint human fashion segmentation and attribute recognition.
We introduce the object query for segmentation and the attribute query for attribute prediction.
For the attribute stream, we design a novel Multi-Layer Rendering module to explore more fine-grained features.
arXiv Detail & Related papers (2022-04-10T11:11:10Z)
- Personalized Fashion Recommendation from Personal Social Media Data: An Item-to-Set Metric Learning Approach [71.63618051547144]
We study the problem of personalized fashion recommendation from social media data.
We present an item-to-set metric learning framework that learns to compute the similarity between a set of a user's historical fashion items and a new fashion item.
To validate the effectiveness of our approach, we collect a real-world social media dataset.
arXiv Detail & Related papers (2020-05-25T23:24:24Z)
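One plausible reading of the item-to-set similarity in the entry above is to attend over a user's history and compare the attended summary with the candidate item; the attention form and all names below are assumptions, not the paper's design.

```python
import torch
import torch.nn.functional as F

def item_to_set_similarity(history, candidate, tau=1.0):
    """history: (N, D) embeddings of a user's past items;
    candidate: (D,) embedding of a new item. Higher = more similar."""
    sims = history @ candidate                 # (N,) item-level affinities
    weights = F.softmax(sims / tau, dim=0)     # focus on relevant history
    summary = weights @ history                # (D,) set representation
    return F.cosine_similarity(summary, candidate, dim=0)

history = F.normalize(torch.randn(12, 128), dim=-1)
candidate = F.normalize(torch.randn(128), dim=-1)
score = item_to_set_similarity(history, candidate)
```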
- Fashion Recommendation and Compatibility Prediction Using Relational Network [18.13692056232815]
We develop a Relation Network (RN) to build new compatibility learning models.
FashionRN learns the compatibility of an entire outfit, with an arbitrary number of items, in an arbitrary order.
We evaluate our model using a large dataset of 49,740 outfits that we collected from the Polyvore website.
arXiv Detail & Related papers (2020-05-13T21:00:54Z)
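The FashionRN entry stresses compatibility over an arbitrary number of items in arbitrary order. A relation-network-style sketch with that property: a shared MLP scores every unordered item pair, and pooling the pair scores makes the result size- and order-agnostic. All shapes here are placeholders.

```python
import itertools
import torch
import torch.nn as nn

class PairwiseCompatibility(nn.Module):
    def __init__(self, dim=128, hidden=256):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden))
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, items):                 # items: (n_items, dim)
        pairs = [torch.cat([items[i], items[j]])
                 for i, j in itertools.combinations(range(len(items)), 2)]
        relations = self.g(torch.stack(pairs))       # (n_pairs, hidden)
        return self.f(relations.mean(dim=0))         # scalar compatibility

model = PairwiseCompatibility()
print(model(torch.randn(5, 128)).item())   # works for any outfit size >= 2
```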
- Learning Diverse Fashion Collocation by Neural Graph Filtering [78.9188246136867]
We propose a novel fashion collocation framework, Neural Graph Filtering, that models a flexible set of fashion items via a graph neural network.
By applying symmetric operations on the edge vectors, this framework allows varying numbers of inputs/outputs and is invariant to their ordering.
We evaluate the proposed approach on three popular benchmarks, the Polyvore dataset, the Polyvore-D dataset, and our reorganized Amazon Fashion dataset.
arXiv Detail & Related papers (2020-03-11T16:17:08Z)
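The Neural Graph Filtering entry attributes its flexibility to symmetric operations on edge vectors. The sketch below applies a shared MLP to all pairwise edge vectors and pools with a sum, and the final check demonstrates the resulting order invariance; the actual filter design in the paper is richer.

```python
import torch
import torch.nn as nn

class GraphFilter(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                      nn.Linear(dim, dim))

    def forward(self, items):                             # items: (n, dim)
        edges = items.unsqueeze(0) - items.unsqueeze(1)   # (n, n, dim) edge vectors
        h = self.edge_mlp(edges)
        return h.sum(dim=(0, 1))                          # symmetric pooling -> (dim,)

f = GraphFilter()
items = torch.randn(4, 128)
perm = items[torch.randperm(4)]
print(torch.allclose(f(items), f(perm), atol=1e-5))       # True: order-invariant
```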
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.