Explicit Clothing Modeling for an Animatable Full-Body Avatar
- URL: http://arxiv.org/abs/2106.14879v2
- Date: Wed, 30 Jun 2021 19:51:00 GMT
- Title: Explicit Clothing Modeling for an Animatable Full-Body Avatar
- Authors: Donglai Xiang, Fabian Andres Prada, Timur Bagautdinov, Weipeng Xu,
Yuan Dong, He Wen, Jessica Hodgins, Chenglei Wu
- Abstract summary: We build an animatable clothed body avatar with an explicit representation of the clothing on the upper body from multi-view captured videos.
To learn the interaction between the body dynamics and clothing states, we use a temporal convolution network to predict the clothing latent code.
We show photorealistic animation output for three different actors, and demonstrate the advantage of our clothed-body avatars over single-layer avatars.
- Score: 21.451440299450592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work has shown great progress in building photorealistic animatable
full-body codec avatars, but these avatars still face difficulties in
generating high-fidelity animation of clothing. To address the difficulties, we
propose a method to build an animatable clothed body avatar with an explicit
representation of the clothing on the upper body from multi-view captured
videos. We use a two-layer mesh representation to separately register the 3D
scans with templates. In order to improve the photometric correspondence across
different frames, texture alignment is then performed through inverse rendering
of the clothing geometry and texture predicted by a variational autoencoder. We
then train a new two-layer codec avatar with separate modeling of the upper
clothing and the inner body layer. To learn the interaction between the body
dynamics and clothing states, we use a temporal convolution network to predict
the clothing latent code based on a sequence of input skeletal poses. We show
photorealistic animation output for three different actors, and demonstrate the
advantage of our clothed-body avatars over single-layer avatars in the previous
work. We also show the benefit of an explicit clothing model which allows the
clothing texture to be edited in the animation output.
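The abstract describes a temporal convolution network that predicts the clothing latent code from a sequence of input skeletal poses. As a minimal sketch of that idea (the kernel width, pose dimensionality, and single linear layer here are illustrative assumptions, not the paper's actual architecture), a causal 1D convolution over a pose sequence could look like:

```python
import numpy as np

def temporal_conv(poses, weights, bias):
    """Causal 1D convolution mapping a pose sequence to latent codes.

    poses:   (T, P)     sequence of T flattened skeletal poses
    weights: (K, P, D)  kernel spanning the K most recent frames,
                        mapping P pose dims to D latent dims
    bias:    (D,)
    Returns  (T, D)     one predicted clothing latent code per frame.
    """
    T, P = poses.shape
    K, _, D = weights.shape
    # Zero-pad the start so the convolution is causal (no future frames).
    padded = np.vstack([np.zeros((K - 1, P)), poses])
    out = np.empty((T, D))
    for t in range(T):
        window = padded[t:t + K]  # the K most recent frames at time t
        out[t] = np.einsum("kp,kpd->d", window, weights) + bias
    return out
```

In a trained system the latent code at each frame would then be decoded (e.g. by the variational autoencoder mentioned above) into clothing geometry and texture.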
Related papers
- PICA: Physics-Integrated Clothed Avatar [30.277983921620663]
We introduce PICA, a novel representation for high-fidelity animatable clothed human avatars with physics-accurate dynamics, even for loose clothing.
Our method achieves high-fidelity rendering of human bodies in complex and novel driving poses, significantly outperforming previous methods under the same settings.
arXiv Detail & Related papers (2024-07-07T10:23:21Z)
- LayGA: Layered Gaussian Avatars for Animatable Clothing Transfer [40.372917698238204]
We present Layered Gaussian Avatars (LayGA), a new representation that formulates body and clothing as two separate layers.
Our representation is built upon the Gaussian map-based avatar for its excellent representation power of garment details.
In the single-layer reconstruction stage, we propose a series of geometric constraints to reconstruct smooth surfaces.
In the multi-layer fitting stage, we train two separate models to represent body and clothing and utilize the reconstructed clothing geometries as 3D supervision.
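The multi-layer fitting stage above supervises the body and clothing models with reconstructed clothing geometries in 3D. One common choice for such point-based 3D supervision is a symmetric Chamfer distance between the predicted and reconstructed surfaces (this specific loss is an assumption for illustration; the abstract does not name the loss):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a:(N,3) and b:(M,3)."""
    # Pairwise Euclidean distances between every point in a and every point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    # Average nearest-neighbor distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```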
arXiv Detail & Related papers (2024-05-12T16:11:28Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- AvatarStudio: High-fidelity and Animatable 3D Avatar Creation from Text [71.09533176800707]
AvatarStudio is a coarse-to-fine generative model that generates explicit textured 3D meshes for animatable human avatars.
By effectively leveraging the synergy between the articulated mesh representation and the DensePose-conditional diffusion model, AvatarStudio can create high-quality avatars.
arXiv Detail & Related papers (2023-11-29T18:59:32Z)
- AvatarFusion: Zero-shot Generation of Clothing-Decoupled 3D Avatars Using 2D Diffusion [34.609403685504944]
We present AvatarFusion, a framework for zero-shot text-to-avatar generation.
We use a latent diffusion model to provide pixel-level guidance for generating human-realistic avatars.
We also introduce a novel optimization method, called Pixel-Semantics Difference-Sampling (PS-DS), which semantically separates the generation of body and clothes.
arXiv Detail & Related papers (2023-07-13T02:19:56Z)
- DreamWaltz: Make a Scene with Complex 3D Animatable Avatars [68.49935994384047]
We present DreamWaltz, a novel framework for generating and animating complex 3D avatars given text guidance and parametric human body prior.
For animation, our method learns an animatable 3D avatar representation from abundant image priors of diffusion model conditioned on various poses.
arXiv Detail & Related papers (2023-05-21T17:59:39Z)
- Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the rendering enables us to optimize SCARF directly from monocular videos.
We demonstrate that SCARF's clothing has higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z)
- Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing [49.96406805006839]
We introduce pose-driven avatars with explicit modeling of clothing that exhibit both realistic clothing dynamics and photorealistic appearance learned from real-world data.
Our key contribution is a physically-inspired appearance network, capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects even for unseen body-clothing configurations.
arXiv Detail & Related papers (2022-06-30T17:58:20Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Creating 3D human avatars with realistic clothing that moves naturally currently requires an artist.
We show that a 3D representation can capture varied topology at high resolution and can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.