SizeGAN: Improving Size Representation in Clothing Catalogs
- URL: http://arxiv.org/abs/2211.02892v2
- Date: Mon, 26 Jun 2023 18:41:15 GMT
- Title: SizeGAN: Improving Size Representation in Clothing Catalogs
- Authors: Kathleen M. Lewis and John Guttag
- Abstract summary: We present the first method for generating images of garments in a new target size to tackle the size under-representation problem.
Our primary technical contribution is a conditional generative adversarial network that learns deformation fields at multiple resolutions to realistically change the size of models and garments.
Results from our two user studies show SizeGAN outperforms alternative methods along three dimensions -- realism, garment faithfulness, and size -- which are all important for real-world use.
- Score: 2.9008108937701333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online clothing catalogs lack diversity in body shape and garment size.
Brands commonly display their garments on models of one or two sizes, rarely
including plus-size models. To our knowledge, our paper presents the first
method for generating images of garments and models in a new target size to
tackle the size under-representation problem. Our primary technical
contribution is a conditional generative adversarial network that learns
deformation fields at multiple resolutions to realistically change the size of
models and garments. Results from our two user studies show SizeGAN outperforms
alternative methods along three dimensions -- realism, garment faithfulness,
and size -- which are all important for real-world use.
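The abstract's core idea, learning deformation fields at multiple resolutions and using them to resize the model and garment, can be illustrated with a toy sketch. This is not SizeGAN's implementation (the paper learns the fields with a conditional GAN); it only shows, under assumed conventions (fields as per-pixel (dy, dx) displacements, nearest-neighbor sampling), how fields predicted at several resolutions could be upsampled, summed, and used to warp an image:

```python
def upsample_field(field, H, W):
    # nearest-neighbor upsample of a coarse h x w grid of (dy, dx)
    # pixel displacements to the full H x W resolution
    h, w = len(field), len(field[0])
    return [[field[y * h // H][x * w // W] for x in range(W)]
            for y in range(H)]

def warp(image, field):
    # backward warp: each output pixel samples the input at its displaced
    # location, rounded to the nearest pixel and clamped at the border
    H, W = len(image), len(image[0])
    return [[image[min(max(round(y + field[y][x][0]), 0), H - 1)]
                  [min(max(round(x + field[y][x][1]), 0), W - 1)]
             for x in range(W)]
            for y in range(H)]

def multires_warp(image, fields):
    # combine displacement fields predicted at several resolutions
    # (coarse to fine) by upsampling each and summing, then warp once
    H, W = len(image), len(image[0])
    total = [[[0.0, 0.0] for _ in range(W)] for _ in range(H)]
    for f in fields:
        up = upsample_field(f, H, W)
        for y in range(H):
            for x in range(W):
                total[y][x][0] += up[y][x][0]
                total[y][x][1] += up[y][x][1]
    return warp(image, total)
```

In the paper the fields are predicted by a generator and trained adversarially; the coarse resolutions capture overall body-shape change while finer ones refine garment detail, which is the motivation for summing multi-resolution fields here.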
Related papers
- DreamFit: Garment-Centric Human Generation via a Lightweight Anything-Dressing Encoder [51.09561183696647]
Diffusion models for garment-centric human generation from text or image prompts have attracted growing attention.
We propose DreamFit, which incorporates a lightweight Anything-Dressing Encoder tailored specifically for garment-centric human generation.
Our model generalizes surprisingly well to a wide range of (non-)garments, creative styles, and prompt instructions, consistently delivering high-quality results.
arXiv Detail & Related papers (2024-12-23T15:21:28Z)
- Size-Variable Virtual Try-On with Physical Clothes Size [13.790737653304088]
This paper addresses a new virtual try-on problem of fitting any size of clothes to a reference person in the image domain.
Our method achieves size-variable virtual try-on, in which the image size of the try-on clothes changes depending on the relative physical sizes of the clothes and the person.
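The size-variable idea rests on simple proportionality: the garment's rendered pixel length should follow the ratio of its physical size to the reference person's. A minimal sketch of that relationship (the function name and inputs are illustrative assumptions, not the paper's interface):

```python
def garment_pixel_length(garment_cm, person_cm, person_px):
    # pixels per centimeter implied by the reference person in the image
    px_per_cm = person_px / person_cm
    # render the garment at its physical length in that scale
    return round(garment_cm * px_per_cm)
```

For example, a 70 cm garment shown with a 175 cm person who spans 700 pixels would be rendered at 280 pixels.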
arXiv Detail & Related papers (2024-12-09T04:40:55Z)
- PocoLoco: A Point Cloud Diffusion Model of Human Shape in Loose Clothing [97.83361232792214]
PocoLoco is the first template-free, point-based, pose-conditioned generative model for 3D humans in loose clothing.
We formulate avatar clothing deformation as a conditional point-cloud generation task within the denoising diffusion framework.
We release a dataset of two subjects performing various poses in loose clothing with a total of 75K point clouds.
arXiv Detail & Related papers (2024-11-06T20:42:13Z)
- Garment Recovery with Shape and Deformation Priors [51.41962835642731]
We propose a method that delivers realistic garment models from real-world images, regardless of garment shape or deformation.
Not only does our approach recover the garment geometry accurately, it also yields models that can be directly used by downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-17T07:06:21Z)
- FitGAN: Fit- and Shape-Realistic Generative Adversarial Networks for Fashion [5.478764356647437]
We present FitGAN, a generative adversarial model that accounts for garments' entangled size and fit characteristics at scale.
Our model learns disentangled item representations and generates realistic images reflecting the true fit and shape properties of fashion articles.
arXiv Detail & Related papers (2022-06-23T15:10:28Z)
- SMPLicit: Topology-aware Generative Model for Clothed People [65.84665248796615]
We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction from images of dressed people.
arXiv Detail & Related papers (2021-03-11T18:57:03Z)
- SIZER: A Dataset and Model for Parsing 3D Clothing and Learning Size Sensitive 3D Clothing [50.63492603374867]
We introduce SizerNet to predict 3D clothing conditioned on human body shape and garment size parameters.
We also introduce ParserNet to infer garment meshes and body shape under clothing with personal details in a single pass from an input mesh.
arXiv Detail & Related papers (2020-07-22T18:13:24Z)
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose, and corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)
- GarmentGAN: Photo-realistic Adversarial Fashion Transfer [0.0]
GarmentGAN performs image-based garment transfer through generative adversarial methods.
The framework allows users to virtually try-on items before purchase and generalizes to various apparel types.
arXiv Detail & Related papers (2020-03-04T05:01:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.