Towards Intelligent Design: A Self-driven Framework for Collocated Clothing Synthesis Leveraging Fashion Styles and Textures
- URL: http://arxiv.org/abs/2501.13396v1
- Date: Thu, 23 Jan 2025 05:46:08 GMT
- Title: Towards Intelligent Design: A Self-driven Framework for Collocated Clothing Synthesis Leveraging Fashion Styles and Textures
- Authors: Minglong Dong, Dongliang Zhou, Jianghong Ma, Haijun Zhang
- Abstract summary: Collocated clothing synthesis (CCS) has emerged as a pivotal topic in fashion technology. Previous investigations have relied on using paired outfits, such as a pair of matching upper and lower clothing, to train a generative model for achieving this task. We introduce a new self-driven framework, named style- and texture-guided generative network (ST-Net), to synthesize collocated clothing without the necessity for paired outfits.
- Score: 17.35328594773488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collocated clothing synthesis (CCS) has emerged as a pivotal topic in fashion technology, primarily concerned with the generation of a clothing item that harmoniously matches a given item. However, previous investigations have relied on using paired outfits, such as a pair of matching upper and lower clothing, to train a generative model for achieving this task. This reliance on the expertise of fashion professionals in the construction of such paired outfits has engendered a laborious and time-intensive process. In this paper, we introduce a new self-driven framework, named style- and texture-guided generative network (ST-Net), to synthesize collocated clothing without the necessity for paired outfits, leveraging self-supervised learning. ST-Net is designed to extrapolate fashion compatibility rules from the style and texture attributes of clothing, using a generative adversarial network. To facilitate the training and evaluation of our model, we have constructed a large-scale dataset specifically tailored for unsupervised CCS. Extensive experiments substantiate that our proposed method outperforms the state-of-the-art baselines in terms of both visual authenticity and fashion compatibility.
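The abstract describes ST-Net only at a high level. As a hedged illustration of the general recipe it names (a generator conditioned on the given item's features plus style and texture codes, trained with an adversarial loss), here is a minimal PyTorch sketch; every module, layer size, and loss choice is an assumption made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a style- and texture-guided GAN step
# (illustrative only; not ST-Net's actual architecture or losses).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps a given item's features plus style/texture codes to a matching item."""
    def __init__(self, feat_dim=256, style_dim=64, texture_dim=64, img_ch=3):
        super().__init__()
        self.fc = nn.Linear(feat_dim + style_dim + texture_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),    # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),     # 16x16 -> 32x32
            nn.ConvTranspose2d(32, img_ch, 4, 2, 1), nn.Tanh(), # 32x32 -> 64x64
        )

    def forward(self, item_feat, style, texture):
        z = torch.cat([item_feat, style, texture], dim=1)
        return self.net(self.fc(z).view(-1, 128, 8, 8))

class Discriminator(nn.Module):
    """Scores the realism of a synthesized matching item."""
    def __init__(self, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch, 32, 4, 2, 1), nn.LeakyReLU(0.2),  # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),      # 32 -> 16
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),     # 16 -> 8
            nn.Flatten(), nn.Linear(128 * 8 * 8, 1),
        )

    def forward(self, img):
        return self.net(img)

# One adversarial step on random stand-in tensors (unpaired training data).
G, D = Generator(), Discriminator()
item_feat = torch.randn(4, 256)                  # features of the given item
style, texture = torch.randn(4, 64), torch.randn(4, 64)
real = torch.randn(4, 3, 64, 64)                 # real (unpaired) clothing images
fake = G(item_feat, style, texture)
d_loss = F.softplus(-D(real)).mean() + F.softplus(D(fake.detach())).mean()
g_loss = F.softplus(-D(fake)).mean()             # non-saturating GAN loss
```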
Related papers
- FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization [12.096130595139364]
We propose a novel framework, FashionDPO, which fine-tunes the fashion outfit generation model using direct preference optimization.
This framework aims to provide a general fine-tuning approach to fashion generative models, without the need to design a task-specific reward function.
Experiments on two datasets, i.e., iFashion and Polyvore-U, demonstrate the effectiveness of our framework in enhancing the model's ability to align with users' personalized preferences.
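The summary does not spell out the fine-tuning objective. The snippet below sketches the standard DPO loss that such preference fine-tuning builds on, with random stand-in log-probabilities; it is not FashionDPO's code.

```python
# Minimal sketch of the standard DPO objective that preference fine-tuning
# builds on (random stand-in log-probabilities; not FashionDPO's code).
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Push the policy to prefer the chosen generation over the rejected
    one, measured relative to a frozen reference model."""
    ratio_w = logp_w - ref_logp_w   # log-ratio for the preferred outfit
    ratio_l = logp_l - ref_logp_l   # log-ratio for the dispreferred outfit
    return -F.logsigmoid(beta * (ratio_w - ratio_l)).mean()

# A batch of 8 preference pairs with stand-in log-probabilities.
logp_w, logp_l = torch.randn(8), torch.randn(8)
ref_w, ref_l = torch.randn(8), torch.randn(8)
print(dpo_loss(logp_w, logp_l, ref_w, ref_l))
```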
arXiv Detail & Related papers (2025-04-17T12:41:41Z) - COutfitGAN: Learning to Synthesize Compatible Outfits Supervised by Silhouette Masks and Fashion Styles [23.301719420997927]
We propose the new task of generating complementary and compatible fashion items based on an arbitrary number of given fashion items.
In particular, given some fashion items that can make up an outfit, the aim of this paper is to synthesize photo-realistic images of other, complementary, fashion items that are compatible with the given ones.
To achieve this, we propose an outfit generation framework, referred to as COutfitGAN, which includes a pyramid style extractor, an outfit generator, a UNet-based real/fake discriminator, and a collocation discriminator.
arXiv Detail & Related papers (2025-02-12T03:32:28Z) - Learning to Synthesize Compatible Fashion Items Using Semantic Alignment and Collocation Classification: An Outfit Generation Framework [59.09707044733695]
We propose a novel outfit generation framework, i.e., OutfitGAN, with the aim of synthesizing an entire outfit.
OutfitGAN includes a semantic alignment module, which is responsible for characterizing the mapping correspondence between the existing fashion items and the synthesized ones.
In order to evaluate the performance of our proposed models, we built a large-scale dataset consisting of 20,000 fashion outfits.
arXiv Detail & Related papers (2025-02-05T12:13:53Z) - BC-GAN: A Generative Adversarial Network for Synthesizing a Batch of Collocated Clothing [17.91576511810969]
Collocated clothing synthesis using generative networks has significant potential economic value to increase revenue in the fashion industry.
We introduce a novel batch clothing generation framework, named BC-GAN, which is able to synthesize multiple visually collocated clothing images simultaneously.
Our model was evaluated on a large-scale dataset of compatible outfits that we constructed ourselves.
arXiv Detail & Related papers (2025-02-03T05:41:41Z) - AIpparel: A Multimodal Foundation Model for Digital Garments [71.12933771326279]
We introduce AIpparel, a multimodal foundation model for generating and editing sewing patterns.
Our model fine-tunes state-of-the-art large multimodal models on a custom-curated large-scale dataset of over 120,000 unique garments.
We propose a novel tokenization scheme that concisely encodes these complex sewing patterns so that LLMs can learn to predict them efficiently.
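The tokenization scheme itself is not detailed here. As a hedged illustration of the general idea of serializing pattern geometry into a discrete token sequence, the toy quantizer below maps panel vertices to coordinate tokens; the vocabulary, bin count, and panel delimiters are all hypothetical.

```python
# Toy illustration of serializing sewing-pattern geometry into discrete
# tokens for an LLM; the paper's actual scheme is not reproduced here.
def tokenize_panel(vertices, n_bins=256, extent=2.0):
    """Quantize a panel's 2D vertices (floats in [-extent/2, extent/2])
    into integer coordinate tokens, wrapped in panel delimiters."""
    tokens = ["<PANEL>"]
    for x, y in vertices:
        qx = min(n_bins - 1, max(0, int((x / extent + 0.5) * n_bins)))
        qy = min(n_bins - 1, max(0, int((y / extent + 0.5) * n_bins)))
        tokens += [f"<X{qx}>", f"<Y{qy}>"]
    tokens.append("</PANEL>")
    return tokens

# A toy rectangular panel.
print(tokenize_panel([(-0.5, -0.3), (0.5, -0.3), (0.5, 0.3), (-0.5, 0.3)]))
```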
arXiv Detail & Related papers (2024-12-05T07:35:19Z) - FashionReGen: LLM-Empowered Fashion Report Generation [61.84580616045145]
We propose an intelligent Fashion Analyzing and Reporting system based on advanced Large Language Models (LLMs).
Specifically, it delivers FashionReGen through effective catwalk analysis, which comprises several key procedures.
It also inspires the explorations of more high-level tasks with industrial significance in other domains.
arXiv Detail & Related papers (2024-03-11T12:29:35Z) - DressCode: Autoregressively Sewing and Generating Garments from Text Guidance [61.48120090970027]
DressCode aims to democratize design for novices and offer immense potential in fashion design, virtual try-on, and digital human creation.
We first introduce SewingGPT, a GPT-based architecture integrating cross-attention with text-conditioned embedding to generate sewing patterns.
We then tailor a pre-trained Stable Diffusion to generate tile-based Physically-based Rendering (PBR) textures for the garments.
arXiv Detail & Related papers (2024-01-29T16:24:21Z) - Transformer-based Graph Neural Networks for Outfit Generation [22.86041284499166]
We propose a transformer-based architecture, TGNN, which exploits multi-headed self-attention to capture relations between clothing items in a graph as a message-passing step in convolutional graph neural networks.
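As a rough sketch of that idea (assumed details, not the paper's code), the following shows multi-headed self-attention acting as one message-passing round over the item embeddings of an outfit graph.

```python
# Illustrative sketch (assumed details, not the paper's code): multi-headed
# self-attention as one message-passing round over clothing-item embeddings.
import torch
import torch.nn as nn

class AttentionMessagePassing(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, items, adj_mask=None):
        # items: (batch, n_items, dim); adj_mask (optional): True blocks attention.
        msg, _ = self.attn(items, items, items, attn_mask=adj_mask)
        return self.norm(items + msg)   # residual update of the node features

items = torch.randn(2, 5, 128)          # two outfits of five items each
print(AttentionMessagePassing()(items).shape)  # torch.Size([2, 5, 128])
```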
arXiv Detail & Related papers (2023-04-17T09:18:45Z) - Dress Well via Fashion Cognitive Learning [18.867513936553195]
We propose a Fashion Cognitive Network (FCN) to learn the relationships among visual-semantic embedding of outfit composition and appearance features of individuals.
FCN contains two submodules, namely an outfit encoder and a multi-label graph convolutional network (ML-GCN).
arXiv Detail & Related papers (2022-08-01T06:52:37Z) - VICTOR: Visual Incompatibility Detection with Transformers and Fashion-specific contrastive pre-training [18.753508811614644]
Visual InCompatibility TransfORmer (VICTOR) is optimized for two tasks: 1) overall compatibility as regression and 2) the detection of mismatching items.
We build upon the Polyvore outfit benchmark to generate partially mismatching outfits, creating a new dataset termed Polyvore-MISFITs.
A series of ablation and comparative analyses show that the proposed architecture can compete and even surpass the current state-of-the-art on Polyvore datasets.
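A minimal sketch of the two-task setup the summary describes might pair a shared transformer encoder with an outfit-level regression head and a per-item mismatch head; the layer sizes and mean-pooling choice below are assumptions, not VICTOR's implementation.

```python
# Hedged sketch of a two-task outfit model: shared transformer encoder,
# an outfit-level compatibility regressor, and per-item mismatch logits.
# Sizes and pooling are assumptions, not VICTOR's implementation.
import torch
import torch.nn as nn

class DualTaskOutfitModel(nn.Module):
    def __init__(self, dim=128, heads=4, layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.compat_head = nn.Linear(dim, 1)    # outfit-level regression score
        self.mismatch_head = nn.Linear(dim, 1)  # one mismatch logit per item

    def forward(self, item_feats):
        h = self.encoder(item_feats)                  # (batch, n_items, dim)
        compat = self.compat_head(h.mean(dim=1))      # pooled outfit score
        mismatch = self.mismatch_head(h).squeeze(-1)  # (batch, n_items)
        return compat, mismatch

compat, mismatch = DualTaskOutfitModel()(torch.randn(2, 4, 128))
print(compat.shape, mismatch.shape)  # torch.Size([2, 1]) torch.Size([2, 4])
```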
arXiv Detail & Related papers (2022-07-27T11:18:55Z) - An Application to Generate Style Guided Compatible Outfit [16.63265212958939]
We aim to generate outfits guided by styles or themes using a novel style encoder network.
We present an extensive analysis of different aspects of our method through various experiments.
arXiv Detail & Related papers (2022-05-02T05:45:05Z) - Leveraging Multiple Relations for Fashion Trend Forecasting Based on Social Media [72.06420633156479]
We propose an improved model named Relation Enhanced Attention Recurrent (REAR) network.
Compared to KERN, the REAR model leverages not only the relations among fashion elements but also those among user groups.
To further improve the performance of long-range trend forecasting, the REAR method devises a sliding temporal attention mechanism.
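The summary names a sliding temporal attention mechanism without giving details. The snippet below sketches one common reading, restricting each time step's attention to a fixed window of recent steps; the window size and dimensions are assumptions.

```python
# Hypothetical reading of sliding temporal attention: each time step may
# attend only to the last `window` steps. Sizes are assumptions.
import torch
import torch.nn as nn

def sliding_attention_mask(seq_len, window):
    """Boolean mask where True marks key positions a query must NOT see."""
    idx = torch.arange(seq_len)
    dist = idx.unsqueeze(1) - idx.unsqueeze(0)  # query index minus key index
    return (dist < 0) | (dist >= window)        # keep only the recent window

attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)
series = torch.randn(1, 50, 32)                 # one fashion-element time series
mask = sliding_attention_mask(50, window=8)
out, _ = attn(series, series, series, attn_mask=mask)
print(out.shape)  # torch.Size([1, 50, 32])
```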
arXiv Detail & Related papers (2021-05-07T14:52:03Z) - Personalized Fashion Recommendation from Personal Social Media Data: An Item-to-Set Metric Learning Approach [71.63618051547144]
We study the problem of personalized fashion recommendation from social media data.
We present an item-to-set metric learning framework that learns to compute the similarity between a set of historical fashion items of a user to a new fashion item.
To validate the effectiveness of our approach, we collect a real-world social media dataset.
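The exact metric is not given in this summary. The toy function below illustrates one plausible item-to-set similarity, scoring a candidate item against a user's history with a soft emphasis on the most similar historical items; the cosine embedding space and temperature are assumptions.

```python
# Toy item-to-set similarity (a plausible reading, not the paper's metric):
# score a candidate against a user's item set, softly emphasizing the
# most similar historical items.
import torch
import torch.nn.functional as F

def item_to_set_similarity(history, candidate, tau=0.1):
    """history: (n, d) item embeddings; candidate: (d,). Returns a scalar."""
    h = F.normalize(history, dim=1)
    c = F.normalize(candidate, dim=0)
    sims = h @ c                                 # cosine similarity per item
    weights = torch.softmax(sims / tau, dim=0)   # weight the nearest items most
    return (weights * sims).sum()

history = torch.randn(10, 64)   # ten historical fashion items
candidate = torch.randn(64)     # a new item to score
print(item_to_set_similarity(history, candidate))
```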
arXiv Detail & Related papers (2020-05-25T23:24:24Z) - Fashion Recommendation and Compatibility Prediction Using Relational Network [18.13692056232815]
We develop a Relation Network (RN) to build new compatibility learning models.
FashionRN learns the compatibility of an entire outfit, with an arbitrary number of items, in an arbitrary order.
We evaluate our model using a large dataset of 49,740 outfits that we collected from the Polyvore website.
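A hedged sketch of the relation-network idea follows: a small MLP is applied to every ordered pair of item embeddings and pooled, so the outfit score is invariant to item order and accepts any number of items. All dimensions are illustrative.

```python
# Hedged sketch of a relation network over an outfit: apply a small MLP to
# every ordered item pair and pool, so the score ignores item order and
# accepts any number of items. Dimensions are illustrative.
import torch
import torch.nn as nn
from itertools import permutations

class OutfitRelationNetwork(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * dim, 128), nn.ReLU(),
                               nn.Linear(128, 64), nn.ReLU())
        self.f = nn.Linear(64, 1)   # final compatibility score

    def forward(self, items):
        # items: (n_items, dim); relate every ordered pair of items.
        pairs = [torch.cat([items[i], items[j]])
                 for i, j in permutations(range(len(items)), 2)]
        relations = self.g(torch.stack(pairs))
        return self.f(relations.mean(dim=0))  # pool over pairs -> scalar

print(OutfitRelationNetwork()(torch.randn(5, 64)))  # works for any item count
```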
arXiv Detail & Related papers (2020-05-13T21:00:54Z) - Knowledge Enhanced Neural Fashion Trend Forecasting [81.2083786318119]
This work focuses on investigating fine-grained fashion element trends for specific user groups.
We first contribute a large-scale fashion trend dataset (FIT) collected from Instagram with extracted time series fashion element records and user information.
We propose a Knowledge Enhanced Recurrent Network model (KERN) which takes advantage of the capability of deep recurrent neural networks in modeling time-series data.
arXiv Detail & Related papers (2020-05-07T07:42:17Z)