FreeCloth: Free-form Generation Enhances Challenging Clothed Human Modeling
- URL: http://arxiv.org/abs/2411.19942v3
- Date: Wed, 09 Apr 2025 12:48:01 GMT
- Title: FreeCloth: Free-form Generation Enhances Challenging Clothed Human Modeling
- Authors: Hang Ye, Xiaoxuan Ma, Hai Ci, Wentao Zhu, Yizhou Wang
- Abstract summary: FreeCloth is a novel hybrid framework to model challenging clothed humans. We segment the human body into three categories: unclothed, deformed, and generated. FreeCloth achieves state-of-the-art performance with superior visual fidelity and realism.
- Score: 20.33405634831369
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Achieving realistic animated human avatars requires accurate modeling of pose-dependent clothing deformations. Existing learning-based methods heavily rely on the Linear Blend Skinning (LBS) of minimally-clothed human models like SMPL to model deformation. However, they struggle to handle loose clothing, such as long dresses, where the canonicalization process becomes ill-defined when the clothing is far from the body, leading to disjointed and fragmented results. To overcome this limitation, we propose FreeCloth, a novel hybrid framework to model challenging clothed humans. Our core idea is to use dedicated strategies to model different regions, depending on whether they are close to or distant from the body. Specifically, we segment the human body into three categories: unclothed, deformed, and generated. We simply replicate unclothed regions that require no deformation. For deformed regions close to the body, we leverage LBS to handle the deformation. As for the generated regions, which correspond to loose clothing areas, we introduce a novel free-form, part-aware generator to model them, as they are less affected by movements. This free-form generation paradigm brings enhanced flexibility and expressiveness to our hybrid framework, enabling it to capture the intricate geometric details of challenging loose clothing, such as skirts and dresses. Experimental results on the benchmark dataset featuring loose clothing demonstrate that FreeCloth achieves state-of-the-art performance with superior visual fidelity and realism, particularly in the most challenging cases.
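The abstract above outlines a per-region routing scheme; what follows is a minimal sketch of that idea, assuming point-based geometry. The segmentation labels, the generator interface, and every function name here are hypothetical illustrations, while the lbs helper follows the standard SMPL skinning formulation.

```python
# Hypothetical sketch of the hybrid routing described in the abstract.
# Not the authors' code: labels, shapes, and the generator API are assumed.
import numpy as np

def lbs(points, skin_weights, joint_transforms):
    """Standard Linear Blend Skinning (as used by SMPL): each point moves
    under a convex blend of per-joint rigid transforms.
    points: (N, 3), skin_weights: (N, J), joint_transforms: (J, 4, 4)."""
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N, 4)
    blended = np.einsum("nj,jab->nab", skin_weights, joint_transforms)  # (N, 4, 4)
    return np.einsum("nab,nb->na", blended, homo)[:, :3]

def pose_clothed_human(posed_body, canon_cloth, labels, skin_weights,
                       joint_transforms, loose_generator, pose_code):
    """Per-point labels: 0 = unclothed, 1 = deformed (close-fitting),
    2 = generated (loose clothing). posed_body and canon_cloth are assumed
    to be in point-wise correspondence."""
    unclothed = posed_body[labels == 0]             # replicate the bare body surface
    deformed = lbs(canon_cloth[labels == 1],        # skin close-fitting regions with LBS
                   skin_weights[labels == 1], joint_transforms)
    generated = loose_generator(pose_code)          # free-form points for loose clothing
    return np.concatenate([unclothed, deformed, generated], axis=0)
```

Routing loose clothing through a pose-conditioned generator instead of skinning it sidesteps the ill-defined canonicalization the abstract identifies for points far from the body.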
Related papers
- Limb-Aware Virtual Try-On Network with Progressive Clothing Warping [64.84181064722084]
Image-based virtual try-on aims to transfer an in-shop clothing image to a person image.
Most existing methods adopt a single global deformation to perform clothing warping directly.
We propose a Limb-aware Virtual Try-on Network, PL-VTON, which performs fine-grained clothing warping progressively.
arXiv Detail & Related papers (2025-03-18T09:52:41Z)
- PocoLoco: A Point Cloud Diffusion Model of Human Shape in Loose Clothing [97.83361232792214]
PocoLoco is the first template-free, point-based, pose-conditioned generative model for 3D humans in loose clothing.
We formulate avatar clothing deformation as a conditional point-cloud generation task within the denoising diffusion framework.
We release a dataset of two subjects performing various poses in loose clothing with a total of 75K point clouds.
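As a concrete illustration of the conditional point-cloud diffusion formulation mentioned above, here is a minimal DDPM-style reverse-sampling sketch; the noise-prediction network eps_net, its conditioning interface, the schedule, and the point count are assumptions, since the summary does not specify the architecture.

```python
# Minimal pose-conditioned DDPM sampler over 3D points (illustrative only).
import torch

@torch.no_grad()
def sample_point_cloud(eps_net, pose_cond, n_points=8192, steps=1000):
    betas = torch.linspace(1e-4, 0.02, steps)      # linear noise schedule (assumed)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    x = torch.randn(n_points, 3)                   # start from Gaussian noise
    for t in reversed(range(steps)):
        eps = eps_net(x, pose_cond, t)             # predict the injected noise
        # DDPM posterior mean for the reverse step.
        x = (x - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:                                  # add noise except at the final step
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x                                       # (n_points, 3) clothed-human surface
```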
arXiv Detail & Related papers (2024-11-06T20:42:13Z)
- Towards High-Quality 3D Motion Transfer with Realistic Apparel Animation [69.36162784152584]
We present a novel method aiming for high-quality motion transfer with realistic apparel animation.
We propose a data-driven pipeline that learns to disentangle body and apparel deformations via two neural deformation modules.
Our method produces results with superior quality for various types of apparel.
arXiv Detail & Related papers (2024-07-15T22:17:35Z)
- Neural-ABC: Neural Parametric Models for Articulated Body with Clothes [29.04941764336255]
We introduce Neural-ABC, a novel model that can represent clothed human bodies with disentangled latent spaces for identity, clothing, shape, and pose.
Our model excels at disentangling clothing and identity across different shapes and poses while preserving the style of the clothing.
Compared to other state-of-the-art parametric models, Neural-ABC demonstrates powerful advantages in the reconstruction of clothed human bodies.
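A minimal sketch of what a decoder with such disentangled latent spaces could look like; the latent dimensions, the MLP design, and the vertex count (SMPL's 6890) are illustrative assumptions, not the published Neural-ABC architecture.

```python
# Illustrative disentangled parametric decoder (assumed design, not Neural-ABC's).
import torch
import torch.nn as nn

class DisentangledAvatarDecoder(nn.Module):
    def __init__(self, d_id=64, d_cloth=64, d_shape=16, d_pose=72, n_verts=6890):
        super().__init__()
        self.n_verts = n_verts
        self.mlp = nn.Sequential(
            nn.Linear(d_id + d_cloth + d_shape + d_pose, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_verts * 3),
        )

    def forward(self, z_id, z_cloth, z_shape, z_pose):
        # Each factor lives in its own latent space, so editing one code
        # (e.g. swapping z_cloth) leaves identity, shape, and pose fixed.
        z = torch.cat([z_id, z_cloth, z_shape, z_pose], dim=-1)
        return self.mlp(z).view(-1, self.n_verts, 3)   # (B, n_verts, 3) mesh vertices
```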
arXiv Detail & Related papers (2024-04-06T16:29:10Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method can render natural garment dynamics that deviate strongly from the body, and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- A Two-stage Personalized Virtual Try-on Framework with Shape Control and Texture Guidance [7.302929117437442]
This paper proposes a personalized virtual try-on model (PE-VITON) that uses two stages, shape control and texture guidance, to decouple clothing attributes.
The proposed model effectively addresses common failure modes of traditional try-on methods: poorly restored clothing folds, degraded generation under complex human poses, blurred clothing edges, and indistinct texture styles.
arXiv Detail & Related papers (2023-12-24T13:32:55Z)
- DLCA-Recon: Dynamic Loose Clothing Avatar Reconstruction from Monocular Videos [15.449755248457457]
We propose a method named DLCA-Recon to create human avatars from monocular videos.
The distance between loose clothing and the underlying body changes rapidly from frame to frame as the person moves freely.
Our method can produce superior results for humans with loose clothing compared to the SOTA methods.
arXiv Detail & Related papers (2023-12-19T12:19:20Z)
- Garment Recovery with Shape and Deformation Priors [51.41962835642731]
We propose a method that delivers realistic garment models from real-world images, regardless of garment shape or deformation.
Not only does our approach recover the garment geometry accurately, it also yields models that can be directly used by downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-17T07:06:21Z)
- CloSET: Modeling Clothed Humans on Continuous Surface with Explicit Template Decomposition [36.39531876183322]
We propose to decompose explicit garment-related templates and then add pose-dependent wrinkles to them.
To tackle the seam artifact issues in recent state-of-the-art point-based methods, we propose to learn point features on a body surface.
Our approach is validated on two existing datasets and our newly introduced dataset, showing better clothing deformation results in unseen poses.
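A minimal sketch of the decomposition described above: a pose-independent garment-template offset plus a pose-dependent wrinkle residual, both driven by features learned on the continuous body surface (avoiding the cut seams of a UV atlas). Module names and sizes are illustrative assumptions.

```python
# Illustrative template + wrinkle offset field (assumed design, not CloSET's code).
import torch
import torch.nn as nn

class ClothedOffsetField(nn.Module):
    def __init__(self, d_feat=64, d_pose=72):
        super().__init__()
        self.template = nn.Sequential(   # pose-independent garment template offset
            nn.Linear(d_feat, 256), nn.ReLU(), nn.Linear(256, 3))
        self.wrinkle = nn.Sequential(    # pose-dependent wrinkle residual
            nn.Linear(d_feat + d_pose, 256), nn.ReLU(), nn.Linear(256, 3))

    def forward(self, surf_feat, pose):
        # surf_feat: (N, d_feat) features sampled on the continuous body
        # surface; pose: (d_pose,) body pose parameters.
        pose_in = pose.expand(surf_feat.shape[0], -1)
        wrinkle_in = torch.cat([surf_feat, pose_in], dim=-1)
        return self.template(surf_feat) + self.wrinkle(wrinkle_in)  # total offset
```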
arXiv Detail & Related papers (2023-04-06T15:50:05Z)
- Neural Point-based Shape Modeling of Humans in Challenging Clothing [75.75870953766935]
Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage that replaces canonicalization with a learned pose-independent "coarse shape".
The approach works well for garments that both conform to, and deviate from, the body.
arXiv Detail & Related papers (2022-09-14T17:59:17Z)
- Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing [49.32522765356914]
We learn to animate people in clothing as a function of the body pose.
We learn to map every point in the space to a canonical space, where a learned deformation field is applied to model non-rigid effects.
Neural-GIF can be trained on raw 3D scans and reconstructs detailed complex surface geometry and deformations.
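A minimal sketch of that mapping, under assumed placeholder MLPs rather than the published architecture: a query point in posed space is mapped back to canonical space, displaced by a learned deformation field to model non-rigid effects, and then evaluated against a canonical implicit function.

```python
# Illustrative backward-mapping implicit avatar (assumed design, not Neural-GIF's code).
import torch
import torch.nn as nn

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, d_out))

class BackwardMappedSDF(nn.Module):
    def __init__(self, d_pose=72):
        super().__init__()
        self.canon_map = mlp(3 + d_pose, 3)   # posed space -> canonical space
        self.deform = mlp(3 + d_pose, 3)      # non-rigid displacement field
        self.sdf = mlp(3, 1)                  # pose-independent canonical surface

    def forward(self, x_posed, pose):
        # x_posed: (N, 3) query points; pose: (d_pose,) body pose parameters.
        pose_in = pose.expand(x_posed.shape[0], -1)
        x_canon = self.canon_map(torch.cat([x_posed, pose_in], dim=-1))
        x_canon = x_canon + self.deform(torch.cat([x_canon, pose_in], dim=-1))
        return self.sdf(x_canon)              # signed distance at each query point
```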
arXiv Detail & Related papers (2021-08-19T17:25:16Z)