Garment Avatars: Realistic Cloth Driving using Pattern Registration
- URL: http://arxiv.org/abs/2206.03373v1
- Date: Tue, 7 Jun 2022 15:06:55 GMT
- Title: Garment Avatars: Realistic Cloth Driving using Pattern Registration
- Authors: Oshri Halimi, Fabian Prada, Tuur Stuyck, Donglai Xiang, Timur
Bagautdinov, He Wen, Ron Kimmel, Takaaki Shiratori, Chenglei Wu, Yaser Sheikh
- Abstract summary: We propose an end-to-end pipeline for building drivable representations for clothing.
A Garment Avatar is an expressive and fully-drivable geometry model for a piece of clothing.
We demonstrate the efficacy of our pipeline on a realistic virtual telepresence application.
- Score: 39.936812232884954
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Virtual telepresence is the future of online communication. Clothing is an
essential part of a person's identity and self-expression. Yet, ground-truth
data of registered clothes is currently unavailable at the resolution and
accuracy required for training telepresence models for realistic cloth animation.
Here, we propose an end-to-end pipeline for building drivable representations
for clothing. The core of our approach is a multi-view patterned cloth tracking
algorithm capable of capturing deformations with high accuracy. We further rely
on the high-quality data produced by our tracking method to build a Garment
Avatar: an expressive and fully-drivable geometry model for a piece of
clothing. The resulting model can be animated using a sparse set of views and
produces highly realistic reconstructions which are faithful to the driving
signals. We demonstrate the efficacy of our pipeline on a realistic virtual
telepresence application, where a garment is reconstructed from two views and a
user can pick and swap garment designs at will. In addition, we show that in a
challenging scenario, when driven exclusively by body pose, our drivable garment
avatar produces realistic cloth geometry of significantly higher quality than
the state of the art.
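The abstract does not spell out the tracking algorithm itself. As a rough illustration of the multi-view ingredient only (hypothetical code, not the authors' method), the sketch below triangulates a single detected pattern keypoint from several calibrated views using standard linear (DLT) triangulation; the function name and interface are assumptions for the example.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation of one pattern keypoint.

    proj_mats: list of 3x4 camera projection matrices.
    points_2d: list of (u, v) pixel observations, one per camera.
    Returns the 3D point minimizing the algebraic reprojection error.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: u*(P[2]·X) = P[0]·X, v*(P[2]·X) = P[1]·X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # The solution is the right singular vector of A with the
    # smallest singular value (last row of V^T).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

In a pattern-based tracker, a step like this would run per detected pattern cell across the camera rig; the resulting 3D correspondences then constrain the deforming garment surface.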
Related papers
- Garment Animation NeRF with Color Editing [6.357662418254495]
We propose a novel approach to synthesize garment animations from body motion sequences without the need for an explicit garment proxy.
Our approach infers garment dynamic features from body motion, providing a preliminary overview of garment structure.
We demonstrate the generalizability of our method across unseen body motions and camera views, ensuring detailed structural consistency.
arXiv Detail & Related papers (2024-07-29T08:17:05Z)
- Towards High-Quality 3D Motion Transfer with Realistic Apparel Animation [69.36162784152584]
We present a novel method aiming for high-quality motion transfer with realistic apparel animation.
We propose a data-driven pipeline that learns to disentangle body and apparel deformations via two neural deformation modules.
Our method produces results with superior quality for various types of apparel.
arXiv Detail & Related papers (2024-07-15T22:17:35Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- ClothFit: Cloth-Human-Attribute Guided Virtual Try-On Network Using 3D Simulated Dataset [5.260305201345232]
We propose a novel virtual try-on method called ClothFit.
It can predict the draping shape of a garment on a target body based on the actual size of the garment and human attributes.
Our experimental results demonstrate that ClothFit can significantly improve the existing state-of-the-art methods in terms of photo-realistic virtual try-on results.
arXiv Detail & Related papers (2023-06-24T08:57:36Z)
- Fill in Fabrics: Body-Aware Self-Supervised Inpainting for Image-Based Virtual Try-On [3.5698678013121334]
We propose a self-supervised framework based on a conditional generative adversarial network, comprising a Fabricator together with a Segmenter, a Warper, and a Fuser.
The Fabricator reconstructs the clothing image when provided with a masked clothing as input, and learns the overall structure of the clothing by filling in fabrics.
A virtual try-on pipeline is then trained by transferring the learned representations from the Fabricator to Warper in an effort to warp and refine the target clothing.
arXiv Detail & Related papers (2022-10-03T13:25:31Z)
- Drivable Volumetric Avatars using Texel-Aligned Features [52.89305658071045]
Photo telepresence requires both high-fidelity body modeling and faithful driving to enable dynamically synthesized appearance.
We propose an end-to-end framework that addresses two core challenges in modeling and driving full-body avatars of real people.
arXiv Detail & Related papers (2022-07-20T09:28:16Z)
- Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing [49.96406805006839]
We introduce pose-driven avatars with explicit modeling of clothing that exhibit both realistic clothing dynamics and photorealistic appearance learned from real-world data.
Our key contribution is a physically-inspired appearance network, capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects even for unseen body-clothing configurations.
arXiv Detail & Related papers (2022-06-30T17:58:20Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Creating 3D human avatars with realistic clothing that moves naturally currently requires an artist.
We show that a 3D representation can capture varied topology at high resolution and that it can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.