LUIVITON: Learned Universal Interoperable VIrtual Try-ON
- URL: http://arxiv.org/abs/2509.05030v1
- Date: Fri, 05 Sep 2025 11:40:44 GMT
- Title: LUIVITON: Learned Universal Interoperable VIrtual Try-ON
- Authors: Cong Cao, Xianhang Cheng, Jingyuan Liu, Yujian Zheng, Zhenhui Lin, Meriem Chkir, Hao Li
- Abstract summary: We present LUIVITON, an end-to-end system for fully automated virtual try-on. It is capable of draping complex, multi-layer clothing onto diverse and arbitrarily posed humanoid characters. Our method can handle complex geometries, non-manifold meshes, and generalizes effectively to a wide range of humanoid characters.
- Score: 12.461905938574843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present LUIVITON, an end-to-end system for fully automated virtual try-on, capable of draping complex, multi-layer clothing onto diverse and arbitrarily posed humanoid characters. To address the challenge of aligning complex garments with arbitrary and highly diverse body shapes, we use SMPL as a proxy representation and separate the clothing-to-body draping problem into two correspondence tasks: 1) clothing-to-SMPL and 2) body-to-SMPL correspondence, where each has its unique challenges. While we address the clothing-to-SMPL fitting problem using a geometric learning-based approach for partial-to-complete shape correspondence prediction, we introduce a diffusion model-based approach for body-to-SMPL correspondence using multi-view consistent appearance features and a pre-trained 2D foundation model. Our method can handle complex geometries, non-manifold meshes, and generalizes effectively to a wide range of humanoid characters -- including humans, robots, cartoon subjects, creatures, and aliens, while maintaining computational efficiency for practical adoption. In addition to offering a fully automatic fitting solution, LUIVITON supports fast customization of clothing size, allowing users to adjust clothing sizes and material properties after they have been draped. We show that our system can produce high-quality 3D clothing fittings without any human labor, even when 2D clothing sewing patterns are not available.
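The two-stage correspondence idea in the abstract can be sketched in a toy form: brute-force nearest-neighbour matching stands in for the paper's learned clothing-to-SMPL and body-to-SMPL correspondence models, and the final drape simply transfers each clothing vertex by the displacement of its matched SMPL vertex onto the target body. All function names here are illustrative, not from the paper.

```python
import numpy as np

def nearest_correspondence(src, tgt):
    # For each source vertex, index of the closest target vertex
    # (stand-in for a learned correspondence model).
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=-1)
    return d.argmin(axis=1)

def drape(clothing_verts, smpl_verts, body_verts):
    # Stage 1: clothing-to-SMPL correspondence (partial-to-complete).
    c2s = nearest_correspondence(clothing_verts, smpl_verts)
    # Stage 2: body-to-SMPL correspondence tells us where each SMPL
    # vertex lives on the target character.
    s2b = nearest_correspondence(smpl_verts, body_verts)
    # Transfer: move each clothing vertex by the displacement of its
    # matched SMPL vertex to the corresponding body vertex.
    offsets = body_verts[s2b[c2s]] - smpl_verts[c2s]
    return clothing_verts + offsets
```

With a body that is simply a translated copy of the SMPL proxy, the clothing is carried along by the same translation; the real system replaces both matching steps with learned models that handle arbitrary shapes and poses.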
Related papers
- ToMiE: Towards Explicit Exoskeleton for the Reconstruction of Complicated 3D Human Avatars [41.23897822168498]
We propose a growth strategy that enables the joint tree of the skeleton to expand adaptively. Specifically, our method, called ToMiE, consists of parent joints localization and external joints optimization. ToMiE manages to outperform other methods across various cases with hand-held objects and loose-fitting clothing, not only in rendering quality but also by offering free animation of grown joints.
arXiv Detail & Related papers (2024-10-10T16:25:52Z)
- GarmentCodeData: A Dataset of 3D Made-to-Measure Garments With Sewing Patterns [18.513707884523072]
We present the first large-scale synthetic dataset of 3D made-to-measure garments with sewing patterns.
GarmentCodeData contains 115,000 data points that cover a variety of designs in many common garment categories.
We propose an automatic, open-source 3D garment draping pipeline based on a fast XPBD simulator.
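The XPBD simulator mentioned above projects distance constraints (cloth edges) with a per-constraint compliance term. A minimal single-substep sketch, not the GarmentCodeData implementation (names and parameter values are illustrative):

```python
import numpy as np

def xpbd_cloth_step(x, v, edges, rest_len, inv_mass,
                    dt=1.0 / 60.0, compliance=1e-8, iters=20):
    """One XPBD substep: predict positions under gravity, then iteratively
    project edge-length constraints with compliance."""
    gravity = np.array([0.0, -9.81, 0.0])
    free = (inv_mass > 0.0).astype(float)[:, None]   # pinned vertices stay put
    x_pred = x + dt * v * free + dt * dt * gravity * free
    lam = np.zeros(len(edges))                       # per-constraint multipliers
    alpha = compliance / (dt * dt)                   # XPBD compliance term
    for _ in range(iters):
        for k, (i, j) in enumerate(edges):
            d = x_pred[i] - x_pred[j]
            length = np.linalg.norm(d)
            if length < 1e-9:
                continue
            c = length - rest_len[k]                 # constraint violation
            w = inv_mass[i] + inv_mass[j]
            if w + alpha == 0.0:
                continue
            dlam = (-c - alpha * lam[k]) / (w + alpha)
            lam[k] += dlam
            n = d / length
            x_pred[i] += inv_mass[i] * dlam * n
            x_pred[j] -= inv_mass[j] * dlam * n
    v_new = (x_pred - x) / dt
    return x_pred, v_new
```

A particle pinned by setting its inverse mass to zero anchors the cloth; the compliance parameter trades constraint stiffness against stability, which is what makes XPBD timestep-robust compared to plain PBD.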
arXiv Detail & Related papers (2024-05-27T19:14:46Z)
- UniFolding: Towards Sample-efficient, Scalable, and Generalizable Robotic Garment Folding [53.38503172679482]
UniFolding is a sample-efficient, scalable, and generalizable robotic system for unfolding and folding garments.
UniFolding employs the proposed UFONet neural network to integrate unfolding and folding decisions into a single policy model.
The system is tested on two garment types: long-sleeve and short-sleeve shirts.
arXiv Detail & Related papers (2023-11-02T14:25:10Z)
- ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns [57.176642106425895]
We introduce a garment representation model that addresses limitations of current approaches.
It is faster and yields higher quality reconstructions than purely implicit surface representations.
It supports rapid editing of garment shapes and texture by modifying individual 2D panels.
arXiv Detail & Related papers (2023-05-23T14:23:48Z)
- Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN [66.3650689395967]
We propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on.
To disentangle the style and spatial information of each garment, PASTA-GAN includes an innovative patch-routed disentanglement module.
arXiv Detail & Related papers (2021-11-20T08:36:12Z)
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)
- SMPLicit: Topology-aware Generative Model for Clothed People [65.84665248796615]
We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction in images of dressed people.
arXiv Detail & Related papers (2021-03-11T18:57:03Z)
- DeePSD: Automatic Deep Skinning And Pose Space Deformation For 3D Garment Animation [36.853993692722035]
We present a novel solution to the garment animation problem through deep learning.
Our contribution allows animating any template outfit with arbitrary topology and geometric complexity.
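The classical baseline behind such skinning approaches is linear blend skinning (LBS), where each vertex is deformed by a weighted blend of per-joint transforms. A minimal sketch with illustrative names; DeePSD learns the skinning weights and pose-space corrections rather than hand-specifying them:

```python
import numpy as np

def linear_blend_skinning(verts, weights, joint_transforms):
    """verts: (V, 3) rest-pose vertices; weights: (V, J) skinning weights
    summing to 1 per vertex; joint_transforms: (J, 4, 4) homogeneous joint
    transforms. Returns the posed (V, 3) vertices."""
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (V, 4)
    # Blend each vertex's joint transforms, then apply the blended matrix.
    blended = np.einsum('vj,jrc->vrc', weights, joint_transforms)     # (V, 4, 4)
    posed = np.einsum('vrc,vc->vr', blended, homo)                    # (V, 4)
    return posed[:, :3]
```

For example, a single joint rotated 90 degrees about the z-axis carries a fully-weighted vertex at (1, 0, 0) to (0, 1, 0).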
arXiv Detail & Related papers (2020-09-06T11:52:17Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
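The idea of combining detail-rich implicit functions with a parametric model can be illustrated with a toy signed-distance field: a parametric base shape (here a sphere with a single shape parameter, standing in for a body model like SMPL) plus an implicit residual that adds fine detail on top. Purely illustrative; the paper's components are learned networks and every name here is hypothetical.

```python
import numpy as np

def parametric_sdf(p, radius=1.0):
    # Parametric base model: signed distance to a sphere of given radius.
    return np.linalg.norm(p, axis=-1) - radius

def detail_residual(p):
    # Stand-in for a learned implicit detail network: small surface bumps.
    return 0.05 * np.sin(10.0 * p[..., 0]) * np.sin(10.0 * p[..., 1])

def hybrid_sdf(p, radius=1.0):
    # Combined field: the surface follows the parametric model globally
    # while the implicit residual contributes fine geometry.
    return parametric_sdf(p, radius) + detail_residual(p)
```

Points where `hybrid_sdf` is zero lie on the reconstructed surface; running marching cubes over this field would extract a mesh that tracks the parametric body but carries the implicit detail.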
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.