Learning High-Fidelity Cloth Animation via Skinning-Free Image Transfer
- URL: http://arxiv.org/abs/2512.05593v1
- Date: Fri, 05 Dec 2025 10:28:08 GMT
- Title: Learning High-Fidelity Cloth Animation via Skinning-Free Image Transfer
- Authors: Rong Wang, Wei Mao, Changsheng Lu, Hongdong Li
- Abstract summary: We present a novel method for generating 3D garment deformations from given body poses. Our method significantly improves animation quality on various garment types and recovers finer wrinkles than state-of-the-art methods.
- Score: 64.49436559408049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel method for generating 3D garment deformations from given body poses, which is key to a wide range of applications, including virtual try-on and extended reality. To simplify the cloth dynamics, existing methods mostly rely on linear blend skinning to obtain the low-frequency posed garment shape and only regress high-frequency wrinkles. However, due to the lack of explicit skinning supervision, such skinning-based approaches often produce misaligned shapes when posing the garment, which corrupts the high-frequency signals and prevents the recovery of high-fidelity wrinkles. To tackle this issue, we propose a skinning-free approach that independently estimates the posed (i) vertex positions for the low-frequency posed garment shape, and (ii) vertex normals for high-frequency local wrinkle details. In this way, each frequency modality can be effectively decoupled and directly supervised by the geometry of the deformed garment. To further improve the visual quality of animation, we propose to encode both vertex attributes as rendered texture images, so that 3D garment deformation can be equivalently achieved via 2D image transfer. This enables us to leverage powerful pretrained image models to recover fine-grained visual details in wrinkles, while maintaining superior scalability to garments of diverse topologies without relying on manual UV partitioning. Finally, we propose a multimodal fusion scheme that incorporates constraints from both frequency modalities and robustly recovers deformed 3D garments from the transferred images. Extensive experiments show that our method significantly improves animation quality on various garment types and recovers finer wrinkles than state-of-the-art methods.
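For context, the linear blend skinning (LBS) that the abstract identifies as the basis of existing methods poses each vertex by blending per-bone rigid transforms with skinning weights. Below is a minimal NumPy sketch of this standard baseline, not of the paper's skinning-free method; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, bone_transforms):
    """Standard LBS: v' = sum_j w_vj * T_j * v (illustrative baseline only).

    rest_verts:      (V, 3) rest-pose vertex positions
    weights:         (V, J) skinning weights, each row summing to 1
    bone_transforms: (J, 4, 4) rigid transforms of the J bones in the target pose
    returns:         (V, 3) posed vertex positions
    """
    V = rest_verts.shape[0]
    v_h = np.concatenate([rest_verts, np.ones((V, 1))], axis=1)   # homogeneous coordinates
    blended = np.einsum('vj,jrc->vrc', weights, bone_transforms)  # per-vertex blended 4x4 matrix
    posed = np.einsum('vrc,vc->vr', blended, v_h)                 # apply blended transform
    return posed[:, :3]
```

Because the blended matrix is a linear mixture of rigid transforms, it is generally not rigid itself, which is why LBS gives only a coarse low-frequency posed shape and, as the abstract notes, misaligned shapes can corrupt the high-frequency wrinkle signal when there is no explicit skinning supervision.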
Related papers
- Spatio-Temporal Garment Reconstruction Using Diffusion Mapping via Pattern Coordinates [38.93906389023275]
Reconstructing 3D clothed humans from monocular images and videos is a fundamental problem with applications in virtual try-on, avatar creation, and mixed reality. We propose a method for high-fidelity 3D garment reconstruction from both single images and sequences. The reconstructed garments preserve fine geometric detail while exhibiting realistic dynamic motion, supporting downstream applications such as texture editing, garment sewing, and animation.
arXiv Detail & Related papers (2026-02-27T14:19:23Z)
- DressWild: Feed-Forward Pose-Agnostic Garment Sewing Pattern Generation from In-the-Wild Images [50.11081091174558]
This paper focuses on sewing pattern generation for garment modeling and fabrication applications. We propose DressWild, a novel feed-forward pipeline that reconstructs physics-consistent 2D sewing patterns and the corresponding 3D garments from a single in-the-wild image.
arXiv Detail & Related papers (2026-02-18T14:45:15Z)
- Make-It-Poseable: Feed-forward Latent Posing Model for 3D Humanoid Character Animation [74.6792422278706]
We introduce Make-It-Poseable, a novel feed-forward framework that reformulates character posing as a latent-space transformation problem. Our method reconstructs the character in new poses by directly manipulating its latent representation. It also naturally extends to 3D editing applications like part replacement and refinement.
arXiv Detail & Related papers (2025-12-18T17:01:44Z)
- TeGA: Texture Space Gaussian Avatars for High-Resolution Dynamic Head Modeling [52.87836237427514]
Photoreal avatars are seen as a key component in emerging applications in telepresence, extended reality, and entertainment. We present a new high-detail 3D head avatar model that improves upon the state of the art.
arXiv Detail & Related papers (2025-05-08T22:10:27Z)
- Single View Garment Reconstruction Using Diffusion Mapping Via Pattern Coordinates [45.48311596587306]
Reconstructing 3D clothed humans from images is fundamental to applications like virtual try-on, avatar creation, and mixed reality. We present a novel method for high-fidelity 3D garment reconstruction from single images that bridges 2D and 3D representations.
arXiv Detail & Related papers (2025-04-11T08:39:18Z)
- DiffusedWrinkles: A Diffusion-Based Model for Data-Driven Garment Animation [10.9550231281676]
We present a data-driven method for learning to generate animations of 3D garments using a 2D image diffusion model. Our approach is able to synthesize high-quality 3D animations for a wide variety of garments and body shapes.
arXiv Detail & Related papers (2025-03-24T06:08:26Z)
- High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos [51.8323369577494]
We propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data.
To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network.
We show that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses.
arXiv Detail & Related papers (2023-11-02T13:16:27Z)
- MoDA: Modeling Deformable 3D Objects from Casual Videos [84.29654142118018]
We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts (a sketch of the underlying dual quaternion blending appears after this list).
To register 2D pixels across different frames, we establish correspondences between canonical feature embeddings that encode 3D points within the canonical space.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods.
arXiv Detail & Related papers (2023-04-17T13:49:04Z)
- xCloth: Extracting Template-free Textured 3D Clothes from a Monocular Image [4.056667956036515]
We present a novel framework for template-free textured 3D garment digitization.
More specifically, we propose to extend the PeeledHuman representation to predict pixel-aligned, layered depth and semantic maps.
We achieve high-fidelity 3D garment reconstruction results on three publicly available datasets and generalization to internet images.
arXiv Detail & Related papers (2022-08-27T05:57:00Z)
- Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On [29.458328272854107]
We propose a new generative model for 3D garment deformations that enables us to learn, for the first time, a data-driven method for virtual try-on.
We show that our method is the first to successfully address garment-body contact in unseen body shapes and motions, without compromising realism and detail.
arXiv Detail & Related papers (2021-05-13T17:58:20Z)
- Pose-Guided High-Resolution Appearance Transfer via Progressive Training [65.92031716146865]
We propose a pose-guided appearance transfer network for transferring a given reference appearance to a target pose at unprecedented image resolution.
Our network utilizes dense local descriptors, including a local perceptual loss and local discriminators, to refine details.
Our model produces high-quality images, which can be further utilized in useful applications such as garment transfer between people.
arXiv Detail & Related papers (2020-08-27T03:18:44Z)
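As referenced in the MoDA entry above, dual quaternion blend skinning replaces LBS's linear matrix mixture with a normalized blend of unit dual quaternions, which keeps the blended transform rigid and avoids skin-collapsing ("candy-wrapper") artifacts. The following is a minimal NumPy sketch of the standard, non-neural algorithm for a single vertex; NeuDBS is a learned variant whose details are not reproduced here, and the names and array layouts are illustrative assumptions.

```python
import numpy as np

CONJ = np.array([1.0, -1.0, -1.0, -1.0])  # conjugation mask for quaternions (w, x, y, z)

def quat_mul(a, b):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dqbs_vertex(v, weights, bone_dq):
    """Skin one vertex with standard dual quaternion blending.

    v:        (3,) rest-pose position
    weights:  (J,) skinning weights summing to 1
    bone_dq:  (J, 2, 4) unit dual quaternions per bone (real part q0, dual part qe)
    """
    pivot = bone_dq[0, 0]
    blended = np.zeros((2, 4))
    for w, (q0, qe) in zip(weights, bone_dq):
        sign = 1.0 if np.dot(q0, pivot) >= 0.0 else -1.0  # keep bones on one hemisphere
        blended += w * sign * np.stack([q0, qe])
    q0, qe = blended
    norm = np.linalg.norm(q0)
    q0, qe = q0 / norm, qe / norm  # renormalize: the blend stays a rigid transform
    t = 2.0 * quat_mul(qe, q0 * CONJ)[1:]  # translation encoded in the dual part
    rotated = quat_mul(quat_mul(q0, np.concatenate([[0.0], v])), q0 * CONJ)[1:]
    return rotated + t
```

The renormalization step is the key difference from LBS: the blended dual quaternion is projected back onto the space of rigid motions, so twisting joints rotate the skin instead of collapsing it.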