High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos
- URL: http://arxiv.org/abs/2311.01214v1
- Date: Thu, 2 Nov 2023 13:16:27 GMT
- Title: High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos
- Authors: Xiongzheng Li, Jinsong Zhang, Yu-Kun Lai, Jingyu Yang, Kun Li
- Abstract summary: We propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data.
To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network.
We show that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses.
- Score: 51.8323369577494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Much progress has been made in reconstructing garments from an image or a video. However, no existing work meets the expectation of digitizing high-quality animatable dynamic garments that can be adjusted to various unseen poses. In this paper, we propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data. To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network that formulates the garment reconstruction task as a pose-driven deformation problem. To alleviate the ambiguity of estimating 3D garments from monocular videos, we design a multi-hypothesis deformation module that learns spatial representations of multiple plausible deformations. Experimental results on several public datasets demonstrate that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses. The code will be provided for research purposes.
Related papers
- DressRecon: Freeform 4D Human Reconstruction from Monocular Video [64.61230035671885]
We present a method to reconstruct time-consistent human body models from monocular videos.
We focus on extremely loose clothing or handheld object interactions.
DressRecon yields higher-fidelity 3D reconstructions than prior art.
arXiv Detail & Related papers (2024-09-30T17:59:15Z)
- VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation [79.99551055245071]
We propose VividPose, an end-to-end pipeline that ensures superior temporal stability.
An identity-aware appearance controller integrates additional facial information without compromising other appearance details.
A geometry-aware pose controller utilizes both dense rendering maps from SMPL-X and sparse skeleton maps.
VividPose exhibits superior generalization capabilities on our proposed in-the-wild dataset.
arXiv Detail & Related papers (2024-05-28T13:18:32Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method is able to render natural garment dynamics that deviate highly from the body, and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos [23.25620556096607]
Reconstructing dynamic 3D garment surfaces with open boundaries from monocular videos is an important problem.
We introduce a novel approach, called REC-MV, to jointly optimize the explicit feature curves and the implicit signed distance field (a toy sketch of such a joint objective appears after this list).
Our approach outperforms existing methods and can produce high-quality dynamic garment surfaces.
arXiv Detail & Related papers (2023-05-23T16:53:10Z)
- PERGAMO: Personalized 3D Garments from Monocular Video [6.8338761008826445]
PERGAMO is a data-driven approach to learn a deformable model for 3D garments from monocular images.
We first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos.
We show that our method is capable of producing garment animations that match real-world behaviour, and generalizes to unseen body motions extracted from a motion capture dataset.
arXiv Detail & Related papers (2022-10-26T21:15:54Z)
- Learning Motion-Dependent Appearance for High-Fidelity Rendering of Dynamic Humans from a Single Camera [49.357174195542854]
A key challenge in learning the dynamics of appearance lies in the prohibitively large number of observations required.
We show that our method can generate a temporally coherent video of dynamic humans for unseen body poses and novel views given a single view video.
arXiv Detail & Related papers (2022-03-24T00:22:03Z)
- Dynamic Neural Garments [45.833166320896716]
We present a solution that takes in body joint motion to directly produce realistic dynamic garment image sequences.
Specifically, given the target joint motion sequence of an avatar, we propose dynamic neural garments to jointly simulate and render plausible dynamic garment appearance.
arXiv Detail & Related papers (2021-02-23T17:21:21Z)
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)
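REC-MV above is summarized as jointly optimizing explicit feature curves and an implicit signed distance field. The minimal sketch below shows what such a joint objective could look like, assuming a standard MLP SDF and an eikonal regularizer; the loss terms and weights are illustrative, not REC-MV's actual formulation.

```python
# Hypothetical sketch of a REC-MV-style joint objective: an implicit SDF
# network and explicit garment boundary curves optimized together. Loss
# terms and weights are illustrative assumptions, not the paper's exact ones.
import torch
import torch.nn as nn


class SDFNet(nn.Module):
    """Tiny MLP mapping a 3D point to a signed distance value."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        return self.mlp(pts).squeeze(-1)


def joint_loss(sdf: SDFNet, surface_pts, curve_pts, space_pts,
               w_curve: float = 1.0, w_eik: float = 0.1) -> torch.Tensor:
    # Surface samples should lie on the zero level set of the SDF.
    l_surf = sdf(surface_pts).abs().mean()
    # Explicit feature-curve points (open boundaries) should also lie on it.
    l_curve = sdf(curve_pts).abs().mean()
    # Eikonal regularizer: |grad f| ~= 1 keeps f close to a true distance field.
    space_pts = space_pts.requires_grad_(True)
    d = sdf(space_pts)
    (grad,) = torch.autograd.grad(d.sum(), space_pts, create_graph=True)
    l_eik = ((grad.norm(dim=-1) - 1.0) ** 2).mean()
    return l_surf + w_curve * l_curve + w_eik * l_eik


# Usage: one optimization step on random sample points.
sdf = SDFNet()
opt = torch.optim.Adam(sdf.parameters(), lr=1e-4)
loss = joint_loss(sdf, torch.randn(1024, 3), torch.randn(256, 3), torch.randn(1024, 3))
loss.backward()
opt.step()
```

Treating boundary-curve samples as extra zero-level-set constraints is one straightforward way to couple the explicit curves to the implicit surface.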