FitGAN: Fit- and Shape-Realistic Generative Adversarial Networks for
Fashion
- URL: http://arxiv.org/abs/2206.11768v1
- Date: Thu, 23 Jun 2022 15:10:28 GMT
- Title: FitGAN: Fit- and Shape-Realistic Generative Adversarial Networks for
Fashion
- Authors: Sonia Pecenakova, Nour Karessli, Reza Shirvany
- Abstract summary: We present FitGAN, a generative adversarial model that accounts for garments' entangled size and fit characteristics at scale.
Our model learns disentangled item representations and generates realistic images reflecting the true fit and shape properties of fashion articles.
- Score: 5.478764356647437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Amidst the rapid growth of fashion e-commerce, remote fitting of fashion
articles remains a complex and challenging problem and a main driver of
customers' frustration. Despite recent advances in 3D virtual try-on
solutions, such approaches remain limited to a very narrow selection of
articles - if not only a handful - and often to a single size of those fashion
items. Other state-of-the-art approaches that aim to help customers find
what fits them online mostly require a high level of customer engagement and
privacy-sensitive data (such as height, weight, age, gender, belly shape,
etc.), or alternatively need images of customers' bodies in tight clothing.
They also often lack the ability to produce fit- and shape-aware visual
guidance at scale, coming up short by simply advising which size to order
to best match a customer's physical body attributes, without providing any
information on how the garment may fit and look. To take a leap forward and
surpass the limitations of current approaches, we present FitGAN, a
generative adversarial model that explicitly accounts for the entangled size
and fit characteristics of online fashion garments at scale. Conditioned on
the fit and shape of the articles, our model learns disentangled item
representations and generates realistic images reflecting the true fit and
shape properties of fashion articles. Through experiments on real-world data
at scale, we demonstrate that our approach can synthesize visually realistic
and diverse fits of fashion items, and we explore its ability to control the
fit and shape of images for thousands of online garments.
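The abstract describes a generator conditioned on fit and shape attributes while keeping the item representation disentangled from them. As a rough illustration of that conditioning mechanism only (the paper's actual architecture, dimensions, and labels are not reproduced here; all names and sizes below are hypothetical), a minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8    # disentangled item representation (hypothetical size)
FIT_CLASSES = 3   # e.g. slim / regular / loose (illustrative labels)
IMG_PIXELS = 16   # tiny stand-in for a generated image

# Hypothetical generator weights: a single-hidden-layer MLP.
W1 = rng.normal(size=(LATENT_DIM + FIT_CLASSES, 32))
W2 = rng.normal(size=(32, IMG_PIXELS))

def generate(z, fit_label):
    """Map a latent item code plus a one-hot fit condition to an 'image'.

    Concatenating the condition to the latent code is the standard
    conditional-GAN mechanism; FitGAN's real model is far richer.
    """
    cond = np.zeros(FIT_CLASSES)
    cond[fit_label] = 1.0
    h = np.tanh(np.concatenate([z, cond]) @ W1)
    return np.tanh(h @ W2)

z = rng.normal(size=LATENT_DIM)
# Same item code, different fit conditions -> different renderings,
# which is the behavior the paper's conditioning is after.
slim = generate(z, 0)
loose = generate(z, 2)
print(slim.shape, np.allclose(slim, loose))  # (16,) False
```

Because the item code `z` is held fixed while only the fit condition changes, any difference between the two outputs is attributable to the condition alone, which is the intuition behind disentangling item identity from fit.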
Related papers
- AnyFit: Controllable Virtual Try-on for Any Combination of Attire Across Any Scenario [50.62711489896909]
AnyFit surpasses all baselines on high-resolution benchmarks and real-world data by a large margin.
AnyFit's impressive performance on high-fidelity virtual try-on in any scenario from any image paves a new path for future research within the fashion community.
arXiv Detail & Related papers (2024-05-28T13:33:08Z) - DL-EWF: Deep Learning Empowering Women's Fashion with Grounded-Segment-Anything Segmentation for Body Shape Classification [0.0]
One of the most pressing challenges in the fashion industry is the mismatch between individuals' body shapes and the garments they purchase.
Traditional methods for determining human body shape are limited due to their low accuracy, high costs, and time-consuming nature.
New approaches, utilizing digital imaging and deep neural networks (DNNs), have been introduced to identify human body shapes.
arXiv Detail & Related papers (2024-04-07T09:17:00Z) - AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using
Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method can render natural garment dynamics that deviate significantly from the body and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z) - Garment Recovery with Shape and Deformation Priors [51.41962835642731]
We propose a method that delivers realistic garment models from real-world images, regardless of garment shape or deformation.
Not only does our approach recover the garment geometry accurately, it also yields models that can be directly used by downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-17T07:06:21Z) - ClothFit: Cloth-Human-Attribute Guided Virtual Try-On Network Using 3D
Simulated Dataset [5.260305201345232]
We propose a novel virtual try-on method called ClothFit.
It can predict the draping shape of a garment on a target body based on the actual size of the garment and human attributes.
Our experimental results demonstrate that ClothFit significantly improves on existing state-of-the-art methods in terms of photo-realistic virtual try-on results.
arXiv Detail & Related papers (2023-06-24T08:57:36Z) - ALiSNet: Accurate and Lightweight Human Segmentation Network for Fashion
E-Commerce [57.876602177247534]
Smartphones provide a convenient way for users to capture images of their body.
We create a new segmentation model by simplifying Semantic FPN with PointRend.
We finetune this model on a high-quality dataset of humans in a restricted set of poses relevant for our application.
arXiv Detail & Related papers (2023-04-15T11:06:32Z) - SizeGAN: Improving Size Representation in Clothing Catalogs [2.9008108937701333]
We present the first method for generating images of garments in a new target size to tackle the size under-representation problem.
Our primary technical contribution is a conditional generative adversarial network that learns deformation fields at multiple resolutions to realistically change the size of models and garments.
Results from our two user studies show SizeGAN outperforms alternative methods along three dimensions -- realism, garment faithfulness, and size -- which are all important for real world use.
arXiv Detail & Related papers (2022-11-05T12:20:01Z) - Arbitrary Virtual Try-On Network: Characteristics Preservation and
Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach achieves better performance than state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z) - Shape Controllable Virtual Try-on for Underwear Models [0.0]
We propose a Shape Controllable Virtual Try-On Network (SC-VTON) to dress clothing for underwear models.
SC-VTON integrates information of model and clothing to generate warped clothing image.
Our method can generate high-resolution results with detailed textures.
arXiv Detail & Related papers (2021-07-28T04:01:01Z) - SizeFlags: Reducing Size and Fit Related Returns in Fashion E-Commerce [3.324876873771105]
We introduce SizeFlags, a probabilistic Bayesian model based on weakly annotated large-scale data from customers.
We demonstrate the strong impact of the proposed approach in reducing size-related returns in online fashion across 14 countries.
arXiv Detail & Related papers (2021-06-07T11:43:40Z) - SMPLicit: Topology-aware Generative Model for Clothed People [65.84665248796615]
We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction in images of dressed people.
arXiv Detail & Related papers (2021-03-11T18:57:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences arising from its use.