SiCo: An Interactive Size-Controllable Virtual Try-On Approach for Informed Decision-Making
- URL: http://arxiv.org/abs/2408.02803v2
- Date: Sun, 18 May 2025 03:27:52 GMT
- Title: SiCo: An Interactive Size-Controllable Virtual Try-On Approach for Informed Decision-Making
- Authors: Sherry X. Chen, Alex Christopher Lim, Yimeng Liu, Pradeep Sen, Misha Sra
- Abstract summary: Virtual try-on (VTO) applications aim to replicate the in-store shopping experience. Many existing tools adopt a one-size-fits-all approach when visualizing clothing items. We introduce SiCo, a new online VTO system that allows users to upload images of themselves and interact with garments by visualizing how different sizes would fit their bodies.
- Score: 12.418243635054536
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Virtual try-on (VTO) applications aim to replicate the in-store shopping experience and enhance online shopping by enabling users to interact with garments. However, many existing tools adopt a one-size-fits-all approach when visualizing clothing items. This approach limits user interaction with garments, particularly regarding size and fit adjustments, and fails to provide direct insights for size recommendations. As a result, these limitations contribute to high return rates in online shopping. To address this, we introduce SiCo, a new online VTO system that allows users to upload images of themselves and interact with garments by visualizing how different sizes would fit their bodies. Our user study demonstrates that our approach significantly improves users' ability to assess how outfits will appear on their bodies and increases their confidence in selecting clothing sizes that align with their preferences. Based on our evaluation, we believe that SiCo has the potential to reduce return rates and transform the online clothing shopping experience.
Related papers
- Interactive Visualization Recommendation with Hier-SUCB [52.11209329270573]
We propose an interactive personalized visualization recommendation (PVisRec) system that learns from user feedback gathered in previous interactions. For more interactive and accurate recommendations, we propose Hier-SUCB, a contextual semi-bandit for the PVisRec setting.
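The abstract does not spell out the algorithm, but the feedback-driven recommendation loop it describes can be illustrated with a plain UCB1 bandit sketch. This is not the hierarchical semi-bandit from the paper; the arm set, feedback function, and click probabilities below are all hypothetical.

```python
import math
import random

def ucb1_recommend(n_arms, feedback, rounds=2000, c=1.0, seed=0):
    """Plain UCB1 over candidate visualizations.

    feedback(arm, rng) returns 1 if the (simulated) user accepts the
    recommended visualization, else 0. A generic bandit illustration,
    not the Hier-SUCB algorithm itself.
    """
    rng = random.Random(seed)
    counts = [0] * n_arms    # times each visualization was recommended
    values = [0.0] * n_arms  # running mean reward per visualization
    for t in range(1, rounds + 1):
        if t <= n_arms:
            arm = t - 1  # play each arm once before applying the UCB rule
        else:
            arm = max(range(n_arms),
                      key=lambda a: values[a]
                      + c * math.sqrt(math.log(t) / counts[a]))
        r = feedback(arm, rng)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]
    return counts

# Hypothetical simulated users who prefer visualization 2.
probs = [0.3, 0.3, 0.7, 0.3]
counts = ucb1_recommend(4, lambda a, rng: 1 if rng.random() < probs[a] else 0)
```

Over time the bandit concentrates its recommendations on the visualization with the highest acceptance rate while still occasionally exploring the others.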
arXiv Detail & Related papers (2025-02-05T17:14:45Z)
- IMAGDressing-v1: Customizable Virtual Dressing [58.44155202253754]
IMAGDressing-v1 is a virtual dressing task that generates freely editable human images with fixed garments and optional conditions.
IMAGDressing-v1 incorporates a garment UNet that captures semantic features from CLIP and texture features from VAE.
We present a hybrid attention module, including a frozen self-attention and a trainable cross-attention, to integrate garment features from the garment UNet into a frozen denoising UNet.
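As a rough illustration of the described design, the sum of a frozen self-attention branch and a garment cross-attention branch can be sketched in plain Python. The identity projections and toy dimensions stand in for the real learned UNet weights, so this is a structural sketch only, not the IMAGDressing-v1 implementation.

```python
import math

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax(row):
    m = max(row)
    e = [math.exp(x - m) for x in row]
    s = sum(e)
    return [x / s for x in e]

def attention(Q, K, V):
    """Scaled dot-product attention (projections omitted)."""
    d = len(K[0])
    scores = matmul(Q, [list(c) for c in zip(*K)])  # Q K^T
    weights = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return matmul(weights, V)

def hybrid_attention(x, garment):
    """Frozen self-attention over denoising features plus a (nominally
    trainable) cross-attention that injects garment features; here the
    two branch outputs are simply summed."""
    self_out = attention(x, x, x)
    cross_out = attention(x, garment, garment)
    return [[a + b for a, b in zip(r1, r2)]
            for r1, r2 in zip(self_out, cross_out)]

x = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.4, 0.3, 0.2]]   # 2 toy latent tokens
g = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0]]                          # 3 toy garment tokens
out = hybrid_attention(x, g)
```

The output keeps the latent sequence's shape while mixing in garment information, which is the property the hybrid module relies on when it is inserted into a frozen denoising UNet.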
arXiv Detail & Related papers (2024-07-17T16:26:30Z)
- MV-VTON: Multi-View Virtual Try-On with Diffusion Models [91.71150387151042]
The goal of image-based virtual try-on is to generate an image of the target person naturally wearing the given clothing.
Existing methods solely focus on the frontal try-on using the frontal clothing.
We introduce Multi-View Virtual Try-ON (MV-VTON), which aims to reconstruct the dressing results from multiple views using the given clothes.
arXiv Detail & Related papers (2024-04-26T12:27:57Z)
- ClothFit: Cloth-Human-Attribute Guided Virtual Try-On Network Using 3D Simulated Dataset [5.260305201345232]
We propose a novel virtual try-on method called ClothFit.
It can predict the draping shape of a garment on a target body based on the actual size of the garment and human attributes.
Our experimental results demonstrate that ClothFit can significantly improve the existing state-of-the-art methods in terms of photo-realistic virtual try-on results.
arXiv Detail & Related papers (2023-06-24T08:57:36Z)
- ALiSNet: Accurate and Lightweight Human Segmentation Network for Fashion E-Commerce [57.876602177247534]
Smartphones provide a convenient way for users to capture images of their body.
We create a new segmentation model by simplifying Semantic FPN with PointRend.
We finetune this model on a high-quality dataset of humans in a restricted set of poses relevant for our application.
arXiv Detail & Related papers (2023-04-15T11:06:32Z)
- FitGAN: Fit- and Shape-Realistic Generative Adversarial Networks for Fashion [5.478764356647437]
We present FitGAN, a generative adversarial model that accounts for garments' entangled size and fit characteristics at scale.
Our model learns disentangled item representations and generates realistic images reflecting the true fit and shape properties of fashion articles.
arXiv Detail & Related papers (2022-06-23T15:10:28Z)
- Per Garment Capture and Synthesis for Real-time Virtual Try-on [15.128477359632262]
Existing image-based works try to synthesize a try-on image from a single image of a target garment.
It is difficult to reproduce changes in wrinkles caused by pose and body-size variation, as well as pulling and stretching of the garment by hand.
We propose an alternative per garment capture and synthesis workflow to handle such rich interactions by training the model with many systematically captured images.
arXiv Detail & Related papers (2021-09-10T03:49:37Z)
- Shape Controllable Virtual Try-on for Underwear Models [0.0]
We propose a Shape Controllable Virtual Try-On Network (SC-VTON) to dress clothing for underwear models.
SC-VTON integrates information of model and clothing to generate warped clothing image.
Our method can generate high-resolution results with detailed textures.
arXiv Detail & Related papers (2021-07-28T04:01:01Z)
- SizeFlags: Reducing Size and Fit Related Returns in Fashion E-Commerce [3.324876873771105]
We introduce SizeFlags, a probabilistic Bayesian model based on weakly annotated large-scale data from customers.
We demonstrate the strong impact of the proposed approach in reducing size-related returns in online fashion over 14 countries.
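The summary only says the model is probabilistic and Bayesian, so as a minimal illustration of the idea, a Beta-Bernoulli posterior over an article's size-related return rate can be used to flag problematic sizes. The prior, threshold, and counts below are hypothetical, and the actual SizeFlags model is more elaborate.

```python
def size_flag(orders, returns, alpha=1.0, beta=1.0, threshold=0.4):
    """Posterior mean return rate under a Beta(alpha, beta) prior,
    updated with weakly annotated order outcomes (returned / kept).
    Flag the article size if the posterior mean exceeds the threshold.
    """
    mean = (alpha + returns) / (alpha + beta + orders)
    return mean, mean > threshold

# Hypothetical article: 50 orders, 28 size-related returns.
rate, flag = size_flag(50, 28)
```

The Beta prior keeps estimates stable for articles with few orders, which matters when the annotations are weak and sparse.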
arXiv Detail & Related papers (2021-06-07T11:43:40Z)
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation under cases like cloth changing or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
- SIZER: A Dataset and Model for Parsing 3D Clothing and Learning Size Sensitive 3D Clothing [50.63492603374867]
We introduce SizerNet to predict 3D clothing conditioned on human body shape and garment size parameters.
We also introduce ParserNet to infer garment meshes and shape under clothing with personal details in a single pass from an input mesh.
arXiv Detail & Related papers (2020-07-22T18:13:24Z)
- Shop The Look: Building a Large Scale Visual Shopping System at Pinterest [16.132346347702075]
Shop The Look is an online shopping discovery service at Pinterest, leveraging visual search to enable users to find and buy products within an image.
We discuss topics including core technology across object detection and visual embeddings, serving infrastructure for realtime inference, and data labeling methodology for training/evaluation data collection and human evaluation.
The user-facing impacts of our system design choices are measured through offline evaluations, human relevance judgements, and online A/B experiments.
arXiv Detail & Related papers (2020-06-18T21:38:07Z)
- Personalized Fashion Recommendation from Personal Social Media Data: An Item-to-Set Metric Learning Approach [71.63618051547144]
We study the problem of personalized fashion recommendation from social media data.
We present an item-to-set metric learning framework that learns to compute the similarity between a set of historical fashion items of a user to a new fashion item.
To validate the effectiveness of our approach, we collect a real-world social media dataset.
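The general item-to-set scoring idea can be sketched with fixed cosine similarities and max-pooling over the user's historical set; the paper instead learns both the metric and the set aggregation, and the embeddings below are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def item_to_set_similarity(history, candidate):
    """Score a candidate item against a user's set of historical items.
    Max-pooling over per-item similarities is one simple, fixed choice
    of set aggregation."""
    return max(cosine(h, candidate) for h in history)

# Hypothetical 2-d embeddings of a user's past items and a new item.
history = [[1.0, 0.0], [0.8, 0.6]]
sim = item_to_set_similarity(history, [1.0, 0.1])
```

A candidate close to any historical item scores highly, which captures the intuition that a user's taste is a set of styles rather than a single average.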
arXiv Detail & Related papers (2020-05-25T23:24:24Z)
- Learning Diverse Fashion Collocation by Neural Graph Filtering [78.9188246136867]
We propose a novel fashion collocation framework, Neural Graph Filtering, that models a flexible set of fashion items via a graph neural network.
By applying symmetric operations on the edge vectors, this framework allows varying numbers of inputs/outputs and is invariant to their ordering.
We evaluate the proposed approach on three popular benchmarks, the Polyvore dataset, the Polyvore-D dataset, and our reorganized Amazon Fashion dataset.
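The permutation-invariance property claimed above follows from using symmetric operations on edge vectors. A minimal sketch: build an edge vector for every unordered item pair and aggregate with a sum, so reordering the items cannot change the result. The absolute-difference edges and sum pooling are placeholder choices, not the paper's learned filters.

```python
def edge_filter(items):
    """Aggregate pairwise edge vectors with symmetric operations
    (absolute difference per pair, sum over pairs), making the output
    invariant to item ordering and defined for any number of items."""
    dim = len(items[0])
    agg = [0.0] * dim
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            edge = [abs(a - b) for a, b in zip(items[i], items[j])]
            agg = [s + e for s, e in zip(agg, edge)]
    return agg

# Hypothetical 2-d item embeddings; any ordering gives the same output.
a, b, c = [1.0, 2.0], [0.5, 0.0], [2.0, 1.0]
out1 = edge_filter([a, b, c])
out2 = edge_filter([c, a, b])
```

Because every pair contributes the same edge vector regardless of order, the aggregate is identical for both orderings, which is exactly what lets the framework accept varying numbers of inputs.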
arXiv Detail & Related papers (2020-03-11T16:17:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.