DLCA-Recon: Dynamic Loose Clothing Avatar Reconstruction from Monocular Videos
- URL: http://arxiv.org/abs/2312.12096v2
- Date: Wed, 20 Dec 2023 05:21:26 GMT
- Title: DLCA-Recon: Dynamic Loose Clothing Avatar Reconstruction from Monocular Videos
- Authors: Chunjie Luo, Fei Luo, Yusen Wang, Enxu Zhao, Chunxia Xiao
- Abstract summary: We propose a method named DLCA-Recon to create human avatars from monocular videos.
The distance from loose clothing to the underlying body rapidly changes in every frame when the human freely moves and acts.
Our method produces superior results for humans in loose clothing compared to state-of-the-art methods.
- Score: 15.449755248457457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing a dynamic human with loose clothing is an important but
difficult task. To address this challenge, we propose a method named DLCA-Recon
to create human avatars from monocular videos. The distance from loose clothing
to the underlying body rapidly changes in every frame when the human freely
moves and acts. Previous methods lack effective geometric initialization and
constraints to guide the optimization of the deformation that explains this
dramatic change, resulting in discontinuous and incomplete reconstructed
surfaces. To model the deformation more accurately, we propose to initialize an
estimated 3D clothed human in the canonical space, as it is easier for
deformation fields to learn from a clothed human than from the bare SMPL body.
Using both explicit mesh and implicit SDF representations, we exploit the physical
connection information between consecutive frames and propose a dynamic
deformation field (DDF) to optimize deformation fields. DDF accounts for
contributive forces on loose clothing to enhance the interpretability of
deformations and effectively capture the free movement of loose clothing.
Moreover, we propagate SMPL skinning weights to each individual subject and
refine the pose and skinning weights during optimization to improve the skinning
transformation. With this more reasonable initialization and the DDF, we can
simulate real-world physics more accurately. Extensive experiments on public
and our own datasets validate that our method can produce superior results for
humans in loose clothing compared to state-of-the-art methods.
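The abstract names three ingredients: a clothed canonical template, a dynamic deformation field (DDF) that links consecutive frames, and refinable SMPL skinning weights. As a rough illustration only, the PyTorch sketch below combines a non-rigid offset MLP conditioned on the previous frame's offset with linear blend skinning over learnable weights; all module names, layer sizes, and the random weight initialization are assumptions, not the authors' implementation (which initializes weights from SMPL and explicitly models contributive forces).

```python
import torch
import torch.nn as nn

class DeformableAvatar(nn.Module):
    """Minimal sketch (assumed design, not the paper's code): canonical
    points are (1) warped by a non-rigid deformation MLP conditioned on
    the previous frame's offset, a stand-in for the dynamic deformation
    field, then (2) posed by linear blend skinning with refinable weights."""

    def __init__(self, num_points: int, num_joints: int = 24, hidden: int = 128):
        super().__init__()
        # Refinable skinning weights; the paper propagates them from SMPL,
        # here they are random purely for illustration.
        self.skin_logits = nn.Parameter(torch.randn(num_points, num_joints))
        # Deformation MLP: input = canonical xyz + previous frame's offset.
        self.deform_mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts_c, prev_offset, joint_tfms):
        # pts_c:       (N, 3)    canonical clothed-human points
        # prev_offset: (N, 3)    last frame's non-rigid offset (temporal link)
        # joint_tfms:  (J, 4, 4) per-joint rigid transforms for this pose
        offset = self.deform_mlp(torch.cat([pts_c, prev_offset], dim=-1))
        pts_d = pts_c + offset                       # non-rigidly deformed
        w = torch.softmax(self.skin_logits, dim=-1)  # (N, J) valid weights
        blended = torch.einsum("nj,jab->nab", w, joint_tfms)   # (N, 4, 4)
        pts_h = torch.cat([pts_d, torch.ones_like(pts_d[:, :1])], dim=-1)
        pts_posed = torch.einsum("nab,nb->na", blended, pts_h)[:, :3]
        return pts_posed, offset  # offset feeds the next frame's input
```

Feeding the previous frame's offset into the MLP is the simplest way to encode the frame-to-frame connection the abstract mentions; calling the module per frame with identity joint_tfms of shape (24, 4, 4) and a zero initial prev_offset exercises the full pipeline.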
Related papers
- PBDyG: Position Based Dynamic Gaussians for Motion-Aware Clothed Human Avatars [18.101742122988707]
This paper introduces a novel clothed human model that can be learned from multiview RGB videos.
Our method realizes "movement-dependent" cloth deformation via physical simulation (see the position-based dynamics sketch after this list).
Experiments demonstrate that our method not only accurately reproduces appearance but also enables the reconstruction of avatars wearing highly deformable garments.
arXiv Detail & Related papers (2024-12-05T18:53:06Z)
- Free-form Generation Enhances Challenging Clothed Human Modeling [20.33405634831369]
We propose a novel hybrid framework to model clothed humans.
Our core idea is to use dedicated strategies to model different regions, depending on whether they are close to or distant from the body.
Our method achieves state-of-the-art performance with superior visual fidelity and realism, particularly in the most challenging cases.
arXiv Detail & Related papers (2024-11-29T18:58:17Z)
- PocoLoco: A Point Cloud Diffusion Model of Human Shape in Loose Clothing [97.83361232792214]
PocoLoco is the first template-free, point-based, pose-conditioned generative model for 3D humans in loose clothing.
We formulate avatar clothing deformation as a conditional point-cloud generation task within the denoising diffusion framework.
We release a dataset of two subjects performing various poses in loose clothing with a total of 75K point clouds.
arXiv Detail & Related papers (2024-11-06T20:42:13Z)
- DressRecon: Freeform 4D Human Reconstruction from Monocular Video [64.61230035671885]
We present a method to reconstruct time-consistent human body models from monocular videos.
We focus on extremely loose clothing or handheld object interactions.
DressRecon yields higher-fidelity 3D reconstructions than prior art.
arXiv Detail & Related papers (2024-09-30T17:59:15Z)
- Towards High-Quality 3D Motion Transfer with Realistic Apparel Animation [69.36162784152584]
We present a novel method aiming for high-quality motion transfer with realistic apparel animation.
We propose a data-driven pipeline that learns to disentangle body and apparel deformations via two neural deformation modules.
Our method produces results with superior quality for various types of apparel.
arXiv Detail & Related papers (2024-07-15T22:17:35Z)
- PICA: Physics-Integrated Clothed Avatar [30.277983921620663]
We introduce PICA, a novel representation for high-fidelity animatable clothed human avatars with physics-accurate dynamics, even for loose clothing.
Our method achieves high-fidelity rendering of human bodies in complex and novel driving poses, significantly outperforming previous methods under the same settings.
arXiv Detail & Related papers (2024-07-07T10:23:21Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images [60.56518548286836]
To generate realistic cloth deformations from novel input poses, watertight meshes or dense full-body scans are usually needed as inputs.
We propose an approach that can quickly generate realistic clothed human avatars, represented as controllable neural SDFs, given only monocular depth images.
arXiv Detail & Related papers (2021-06-22T17:30:12Z)
- Deep Physics-aware Inference of Cloth Deformation for Monocular Human Performance Capture [84.73946704272113]
We show how integrating physics into the training process improves the learned cloth deformations and allows modeling clothing as a separate piece of geometry.
Our approach leads to a significant improvement over current state-of-the-art methods and is thus a clear step towards realistic monocular capture of the entire deforming surface of a clothed human.
arXiv Detail & Related papers (2020-11-25T16:46:00Z)
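Several entries above (PBDyG, PICA) couple avatars with physical simulation of cloth. As a generic illustration only, and not any of these papers' implementations, the sketch below shows one step of position-based dynamics (PBD), the classic particle-and-constraint scheme such simulators build on; the time step, iteration count, and the equal 0.5 split of each correction are illustrative assumptions for unit-mass particles.

```python
import numpy as np

def pbd_step(x, v, edges, rest_len, dt=1.0 / 60, iters=10, gravity=(0.0, -9.8, 0.0)):
    """One position-based dynamics step for a particle cloth:
    predict positions from velocities, then iteratively project
    distance constraints so each edge returns to its rest length."""
    x, v = x.copy(), v.copy()
    v += dt * np.asarray(gravity)           # apply external forces
    p = x + dt * v                          # predicted positions
    for _ in range(iters):                  # Gauss-Seidel constraint projection
        for (i, j), l0 in zip(edges, rest_len):
            d = p[i] - p[j]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            corr = 0.5 * (dist - l0) / dist * d
            p[i] -= corr                    # split correction between endpoints
            p[j] += corr
    v = (p - x) / dt                        # recover velocities from positions
    return p, v
```

For example, four particles forming a square with x of shape (4, 3), zero v, and edges along the sides: repeated calls keep the edges near their rest lengths while the square falls under gravity.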