REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos
- URL: http://arxiv.org/abs/2305.14236v2
- Date: Sat, 27 May 2023 17:01:54 GMT
- Title: REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos
- Authors: Lingteng Qiu, Guanying Chen, Jiapeng Zhou, Mutian Xu, Junle Wang and
Xiaoguang Han
- Abstract summary: Reconstructing dynamic 3D garment surfaces with open boundaries from monocular videos is an important problem.
We introduce a novel approach, called REC-MV, to jointly optimize the explicit feature curves and the implicit signed distance field.
Our approach outperforms existing methods and can produce high-quality dynamic garment surfaces.
- Score: 23.25620556096607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing dynamic 3D garment surfaces with open boundaries from
monocular videos is an important problem as it provides a practical and
low-cost solution for clothes digitization. Recent neural rendering methods
achieve high-quality dynamic clothed human reconstruction results from
monocular video, but these methods cannot separate the garment surface from the
body. Moreover, although existing garment reconstruction methods based on
feature curve representation demonstrate impressive results for garment
reconstruction from a single image, they struggle to generate temporally
consistent surfaces for video input. To address these limitations, in
this paper, we formulate this task as an optimization problem of 3D garment
feature curves and surface reconstruction from monocular video. We introduce a
novel approach, called REC-MV, to jointly optimize the explicit feature curves
and the implicit signed distance field (SDF) of the garments. Then the open
garment meshes can be extracted via garment template registration in the
canonical space. Experiments on multiple casually captured datasets show that
our approach outperforms existing methods and can produce high-quality dynamic
garment surfaces. The source code is available at
https://github.com/GAP-LAB-CUHK-SZ/REC-MV.
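The core idea above, jointly optimizing explicit 3D feature curves together with an implicit garment SDF, can be illustrated with a toy sketch. The script below is not the authors' implementation: the MLP size, loss weights, and the circular placeholder target standing in for the real curve supervision from the video are all assumptions. It optimizes a small SDF network and a ring of learnable curve points so that the curve is pulled toward a data term while staying on the SDF zero level set, with an eikonal regularizer keeping the SDF well behaved.

```python
# Minimal, illustrative sketch of joint explicit-curve / implicit-SDF optimization.
# Everything here (network size, loss weights, placeholder target) is an assumption
# made for illustration; it is not the REC-MV pipeline.
import math
import torch
import torch.nn as nn

class GarmentSDF(nn.Module):
    """Tiny coordinate MLP mapping 3D points to signed distance values."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

sdf = GarmentSDF()
# Explicit feature curve: a ring of 64 learnable 3D points (think of a hemline).
curve = nn.Parameter(torch.randn(64, 3) * 0.1)
opt = torch.optim.Adam(list(sdf.parameters()) + [curve], lr=1e-3)

# Placeholder target circle standing in for the real curve supervision (assumption).
theta = torch.linspace(0.0, 2.0 * math.pi, 64)
target = torch.stack([0.5 * theta.cos(), 0.5 * theta.sin(),
                      torch.zeros_like(theta)], dim=-1)

for step in range(200):
    opt.zero_grad()

    # (1) Data term: pull the explicit curve toward its (placeholder) target.
    loss_data = (curve - target).pow(2).mean()

    # (2) Consistency: curve points should lie on the SDF zero level set.
    loss_on_surface = sdf(curve).abs().mean()

    # (3) Eikonal regularizer: the SDF gradient norm should be close to 1.
    pts = (torch.rand(256, 3) * 2.0 - 1.0).requires_grad_(True)
    grad = torch.autograd.grad(sdf(pts).sum(), pts, create_graph=True)[0]
    loss_eikonal = ((grad.norm(dim=-1) - 1.0) ** 2).mean()

    loss = loss_data + 0.1 * loss_on_surface + 0.1 * loss_eikonal
    loss.backward()
    opt.step()
```

In REC-MV itself the supervision comes from the monocular video rather than a fixed target, and the open garment meshes are then extracted via garment template registration in the canonical space, as stated in the abstract.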
Related papers
- DressRecon: Freeform 4D Human Reconstruction from Monocular Video [64.61230035671885]
We present a method to reconstruct time-consistent human body models from monocular videos.
We focus on extremely loose clothing or handheld object interactions.
DressRecon yields higher-fidelity 3D reconstructions than prior art.
arXiv Detail & Related papers (2024-09-30T17:59:15Z)
- ReLoo: Reconstructing Humans Dressed in Loose Garments from Monocular Video in the Wild [33.7726643918619]
ReLoo reconstructs high-quality 3D models of humans dressed in loose garments from monocular in-the-wild videos.
We first establish a layered neural human representation that decomposes clothed humans into a neural inner body and outer clothing.
A global optimization jointly optimizes the shape, appearance, and deformations of the human body and clothing via multi-layer differentiable volume rendering (a minimal compositing sketch appears after this list).
arXiv Detail & Related papers (2024-09-23T17:58:39Z)
- Gaussian Garments: Reconstructing Simulation-Ready Clothing with Photorealistic Appearance from Multi-View Video [66.98046635045685]
We introduce a novel approach for reconstructing realistic simulation-ready garment assets from multi-view videos.
Our method represents garments with a combination of a 3D mesh and a Gaussian texture that encodes both the color and high-frequency surface details.
This representation enables accurate registration of garment geometries to multi-view videos and helps disentangle albedo textures from lighting effects.
arXiv Detail & Related papers (2024-09-12T16:26:47Z)
- Shape of Motion: 4D Reconstruction from a Single Video [51.04575075620677]
We introduce a method capable of reconstructing generic dynamic scenes, featuring explicit, full-sequence-long 3D motion.
We exploit the low-dimensional structure of 3D motion by representing scene motion with a compact set of SE3 motion bases (a minimal blending sketch appears after this list).
Our method achieves state-of-the-art performance for both long-range 3D/2D motion estimation and novel view synthesis on dynamic scenes.
arXiv Detail & Related papers (2024-07-18T17:59:08Z)
- SVG: 3D Stereoscopic Video Generation via Denoising Frame Matrix [60.48666051245761]
We propose a pose-free and training-free approach for generating 3D stereoscopic videos.
Our method warps a generated monocular video into camera views on a stereoscopic baseline using estimated video depth.
We develop a disocclusion boundary re-injection scheme that further improves the quality of video inpainting.
arXiv Detail & Related papers (2024-06-29T08:33:55Z)
- High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos [51.8323369577494]
We propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data.
To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network.
We show that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses.
arXiv Detail & Related papers (2023-11-02T13:16:27Z)
- PERGAMO: Personalized 3D Garments from Monocular Video [6.8338761008826445]
PERGAMO is a data-driven approach to learn a deformable model for 3D garments from monocular images.
We first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos.
We show that our method is capable of producing garment animations that match real-world behaviour, and generalizes to unseen body motions extracted from a motion capture dataset.
arXiv Detail & Related papers (2022-10-26T21:15:54Z)
- NeuPhysics: Editable Neural Geometry and Physics from Monocular Videos [82.74918564737591]
We present a method for learning 3D geometry and physics parameters of a dynamic scene from only a monocular RGB video input.
Experiments show that our method achieves superior mesh and video reconstruction of dynamic scenes compared to competing Neural Field approaches.
arXiv Detail & Related papers (2022-10-22T04:57:55Z)
- MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video [10.679773937444445]
We present a method to capture temporally coherent dynamic clothing deformation from a monocular RGB video input.
We build statistical deformation models for three types of clothing: T-shirt, short pants and long pants.
Our method produces temporally coherent reconstruction of body and clothing from monocular video.
arXiv Detail & Related papers (2020-09-22T17:54:38Z)
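For the multi-layer differentiable volume rendering mentioned in the ReLoo entry above, a minimal single-ray sketch is given below. The Gaussian density bumps, the colors, and the density-weighted color mixing are placeholder assumptions made for illustration; ReLoo's actual networks, ray sampling, and compositing scheme are not reproduced here.

```python
# Toy single-ray compositing of two layers (inner body + outer clothing), in the
# spirit of layered volume rendering. All densities and colors are placeholders.
import numpy as np

# Sample depths along one camera ray and the (constant) segment length.
z = np.linspace(0.0, 4.0, 128)
dz = np.full_like(z, z[1] - z[0])

def bump(z, center, width, peak):
    """Toy 1D density profile standing in for a layer's predicted density."""
    return peak * np.exp(-0.5 * ((z - center) / width) ** 2)

# Clothing layer sits slightly in front of the body layer along this ray.
sigma_body  = bump(z, center=2.0, width=0.05, peak=50.0)
sigma_cloth = bump(z, center=1.8, width=0.05, peak=30.0)
rgb_body  = np.tile(np.array([0.8, 0.6, 0.5]), (len(z), 1))  # skin-like color
rgb_cloth = np.tile(np.array([0.2, 0.3, 0.8]), (len(z), 1))  # blue garment

# Merge layers: sum densities and mix colors by each layer's density share,
# then run standard alpha compositing along the ray.
sigma = sigma_body + sigma_cloth
share = np.divide(sigma_cloth, sigma, out=np.zeros_like(sigma), where=sigma > 0)
rgb = rgb_body * (1.0 - share[:, None]) + rgb_cloth * share[:, None]

alpha = 1.0 - np.exp(-sigma * dz)
trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])  # transmittance
weights = trans * alpha
pixel = (weights[:, None] * rgb).sum(axis=0)
print(pixel)  # dominated by the garment color, since the cloth is hit first
```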
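The SE3 motion bases mentioned in the Shape of Motion entry above can likewise be sketched in a few lines: every canonical 3D point is moved by a per-point weighted blend of a small set of rigid transforms. The linear-blend formulation and all sizes below are assumptions for illustration; the paper's exact parameterization and blending scheme may differ.

```python
# Toy sketch: dense motion from a compact set of rigid (rotation + translation) bases.
import numpy as np

def random_rotation(rng):
    """Random proper rotation via QR decomposition of a Gaussian matrix."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:  # flip one axis so det = +1
        q[:, 0] *= -1
    return q

rng = np.random.default_rng(0)
K, N = 4, 1000  # number of motion bases, number of scene points

# Motion bases for one target frame: K rotations and translations.
R = np.stack([random_rotation(rng) for _ in range(K)])  # (K, 3, 3)
t = rng.standard_normal((K, 3)) * 0.1                   # (K, 3)

# Per-point blending weights over the K bases (each row sums to 1).
logits = rng.standard_normal((N, K))
w = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # (N, K)

# Blend: x'_i = sum_k w_ik * (R_k x_i + t_k)
x = rng.standard_normal((N, 3))                          # canonical points
per_basis = np.einsum('kij,nj->nki', R, x) + t[None]     # (N, K, 3)
x_warped = (w[..., None] * per_basis).sum(axis=1)        # (N, 3)
print(x_warped.shape)  # (1000, 3)
```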
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.