SiCo: A Size-Controllable Virtual Try-On Approach for Informed Decision-Making
- URL: http://arxiv.org/abs/2408.02803v1
- Date: Mon, 5 Aug 2024 19:46:59 GMT
- Title: SiCo: A Size-Controllable Virtual Try-On Approach for Informed Decision-Making
- Authors: Sherry X. Chen, Alex Christopher Lim, Yimeng Liu, Pradeep Sen, Misha Sra
- Abstract summary: Virtual try-on (VTO) applications aim to improve the online shopping experience by allowing users to preview garments.
Many VTO tools fail to consider the crucial relationship between a garment's size and the user's body size.
We introduce SiCo, an online VTO system, where users can upload images of themselves and visualize how different sizes of clothing would look on their body.
- Score: 12.418243635054536
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Virtual try-on (VTO) applications aim to improve the online shopping experience by allowing users to preview garments before making purchase decisions. However, many VTO tools fail to consider the crucial relationship between a garment's size and the user's body size, often employing a one-size-fits-all approach when visualizing a clothing item. This results in poor size recommendations and purchase decisions leading to increased return rates. To address this limitation, we introduce SiCo, an online VTO system, where users can upload images of themselves and visualize how different sizes of clothing would look on their body to help make better-informed purchase decisions. Our user study shows SiCo's superiority over baseline VTO. The results indicate that our approach significantly enhances user ability to gauge the appearance of outfits on their bodies and boosts their confidence in selecting clothing sizes that match desired goals. Based on our evaluation, we believe our VTO design has the potential to reduce return rates and enhance the online clothes shopping experience. Our code is available at https://github.com/SherryXTChen/SiCo.
Related papers
- IMAGDressing-v1: Customizable Virtual Dressing [58.44155202253754]
IMAGDressing-v1 addresses a virtual dressing task: generating freely editable human images with fixed garments under optional conditions.
IMAGDressing-v1 incorporates a garment UNet that captures semantic features from CLIP and texture features from VAE.
We present a hybrid attention module, including a frozen self-attention and a trainable cross-attention, to integrate garment features from the garment UNet into a frozen denoising UNet.
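The hybrid attention module described above can be sketched conceptually. This is my own illustrative sketch (not the paper's code); the class name, dimensions, and the use of `nn.MultiheadAttention` are assumptions standing in for the paper's actual architecture:

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    """Illustrative sketch: a frozen self-attention branch plus a trainable
    cross-attention branch that injects garment features into a denoising
    UNet's hidden states, in the spirit of the module described above."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        for p in self.self_attn.parameters():
            p.requires_grad = False  # frozen, as the abstract describes
        # Trainable cross-attention over garment features.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, hidden: torch.Tensor, garment_feats: torch.Tensor) -> torch.Tensor:
        # Self-attention over the UNet's own tokens (weights stay fixed).
        out, _ = self.self_attn(hidden, hidden, hidden)
        # Cross-attention: queries from the UNet, keys/values from garment features.
        garment_out, _ = self.cross_attn(hidden, garment_feats, garment_feats)
        return out + garment_out
```

The point of the split is that only the garment-injection pathway is trained, while the pretrained denoising behavior is preserved in the frozen branch.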
arXiv Detail & Related papers (2024-07-17T16:26:30Z)
- MV-VTON: Multi-View Virtual Try-On with Diffusion Models [91.71150387151042]
The goal of image-based virtual try-on is to generate an image of the target person naturally wearing the given clothing.
Existing methods focus solely on frontal try-on using frontal views of the clothing.
We introduce Multi-View Virtual Try-On (MV-VTON), which aims to reconstruct dressing results from multiple views using the given clothes.
arXiv Detail & Related papers (2024-04-26T12:27:57Z)
- ClothFit: Cloth-Human-Attribute Guided Virtual Try-On Network Using 3D Simulated Dataset [5.260305201345232]
We propose a novel virtual try-on method called ClothFit.
It can predict the draping shape of a garment on a target body based on the actual size of the garment and human attributes.
Our experimental results demonstrate that ClothFit can significantly improve the existing state-of-the-art methods in terms of photo-realistic virtual try-on results.
arXiv Detail & Related papers (2023-06-24T08:57:36Z)
- ALiSNet: Accurate and Lightweight Human Segmentation Network for Fashion E-Commerce [57.876602177247534]
Smartphones provide a convenient way for users to capture images of their body.
We create a new segmentation model by simplifying Semantic FPN with PointRend.
We finetune this model on a high-quality dataset of humans in a restricted set of poses relevant for our application.
arXiv Detail & Related papers (2023-04-15T11:06:32Z)
- FitGAN: Fit- and Shape-Realistic Generative Adversarial Networks for Fashion [5.478764356647437]
We present FitGAN, a generative adversarial model that accounts for garments' entangled size and fit characteristics at scale.
Our model learns disentangled item representations and generates realistic images reflecting the true fit and shape properties of fashion articles.
arXiv Detail & Related papers (2022-06-23T15:10:28Z)
- Shape Controllable Virtual Try-on for Underwear Models [0.0]
We propose a Shape Controllable Virtual Try-On Network (SC-VTON) to dress clothing for underwear models.
SC-VTON integrates information of model and clothing to generate warped clothing image.
Our method can generate high-resolution results with detailed textures.
arXiv Detail & Related papers (2021-07-28T04:01:01Z)
- SizeFlags: Reducing Size and Fit Related Returns in Fashion E-Commerce [3.324876873771105]
We introduce SizeFlags, a probabilistic Bayesian model based on weakly annotated large-scale data from customers.
We demonstrate the strong impact of the proposed approach in reducing size-related returns in online fashion over 14 countries.
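To make the idea of a probabilistic model over weakly annotated return data concrete, here is a deliberately minimal Beta-Binomial sketch. It is my own illustration, not the SizeFlags model: the function name, the flat Beta(1, 1) prior, and the 0.3 threshold are all assumptions:

```python
def size_flag(returns_small: int, sales: int,
              alpha: float = 1.0, beta: float = 1.0,
              threshold: float = 0.3) -> bool:
    """Flag an article as 'runs small' when the posterior mean of its
    size-related return rate exceeds a threshold.

    With a Beta(alpha, beta) prior over the return rate, observing
    `returns_small` size-related returns out of `sales` orders yields a
    Beta(alpha + returns_small, beta + sales - returns_small) posterior.
    """
    posterior_mean = (alpha + returns_small) / (alpha + beta + sales)
    return posterior_mean > threshold
```

The Bayesian treatment matters for low-volume articles: with few sales, the prior dominates and the flag is conservative, rather than overreacting to a handful of noisy return signals.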
arXiv Detail & Related papers (2021-06-07T11:43:40Z)
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation under cases like cloth changing or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-07-22T18:13:24Z)
- SIZER: A Dataset and Model for Parsing 3D Clothing and Learning Size Sensitive 3D Clothing [50.63492603374867]
We introduce SizerNet to predict 3D clothing conditioned on human body shape and garment size parameters.
We also introduce Net to infer garment meshes and shape under clothing with personal details in a single pass from an input mesh.
arXiv Detail & Related papers (2020-06-18T21:38:07Z)
- Shop The Look: Building a Large Scale Visual Shopping System at Pinterest [16.132346347702075]
Shop The Look is an online shopping discovery service at Pinterest, leveraging visual search to enable users to find and buy products within an image.
We discuss topics including core technology across object detection and visual embeddings, serving infrastructure for realtime inference, and data labeling methodology for training/evaluation data collection and human evaluation.
The user-facing impacts of our system design choices are measured through offline evaluations, human relevance judgements, and online A/B experiments.
arXiv Detail & Related papers (2020-03-11T16:17:08Z)
- Learning Diverse Fashion Collocation by Neural Graph Filtering [78.9188246136867]
We propose a novel fashion collocation framework, Neural Graph Filtering, that models a flexible set of fashion items via a graph neural network.
By applying symmetric operations on the edge vectors, this framework allows varying numbers of inputs/outputs and is invariant to their ordering.
We evaluate the proposed approach on three popular benchmarks, the Polyvore dataset, the Polyvore-D dataset, and our reorganized Amazon Fashion dataset.
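The symmetric edge-vector operation described above admits a minimal illustration. This is a sketch of the general principle (any commutative pooling, such as an element-wise sum, is invariant to the ordering and count of inputs), not the paper's implementation:

```python
import numpy as np

def aggregate_edges(edge_vectors: np.ndarray) -> np.ndarray:
    """Symmetric pooling over a variable number of edge vectors.

    `edge_vectors` has shape (num_edges, dim). An element-wise sum is
    commutative, so shuffling the edges, or using a different number of
    them, produces a representation of the same shape and semantics.
    """
    return edge_vectors.sum(axis=0)
```

Because the pooled result does not depend on edge order, a framework built on such operations can accept outfit graphs with varying numbers of items, which is exactly the flexibility the summary above claims.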
arXiv Detail & Related papers (2020-03-11T16:17:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.