NSF: Neural Surface Fields for Human Modeling from Monocular Depth
- URL: http://arxiv.org/abs/2308.14847v4
- Date: Fri, 27 Oct 2023 19:30:35 GMT
- Authors: Yuxuan Xue, Bharat Lal Bhatnagar, Riccardo Marin, Nikolaos Sarafianos,
Yuanlu Xu, Gerard Pons-Moll, Tony Tung
- Abstract summary: It is challenging to model dynamic and fine-grained clothing deformations from sparse data.
Existing methods for modeling 3D humans from depth data have limitations in terms of computational efficiency, mesh coherency, and flexibility in resolution and topology.
We propose a novel method, Neural Surface Fields (NSF), for modeling 3D clothed humans from monocular depth.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Obtaining personalized 3D animatable avatars from a monocular camera has
several real-world applications in gaming, virtual try-on, animation, and
VR/XR. However, it is very challenging to model dynamic and fine-grained
clothing deformations from such sparse data. Existing methods for modeling 3D
humans from depth data have limitations in terms of computational efficiency,
mesh coherency, and flexibility in resolution and topology. For instance,
reconstructing shapes using implicit functions and extracting explicit meshes
per frame is computationally expensive and cannot ensure coherent meshes across
frames. Moreover, predicting per-vertex deformations on a pre-designed human
template with a discrete surface lacks flexibility in resolution and topology.
To overcome these limitations, we propose a novel method, Neural Surface Fields
(NSF), for modeling 3D clothed humans from monocular depth. NSF defines a neural
field solely on the base surface, which models a continuous and flexible displacement
field. NSF can be adapted to the base surface with different resolution and
topology without retraining at inference time. Compared to existing approaches,
our method eliminates the expensive per-frame surface extraction while
maintaining mesh coherency, and is capable of reconstructing meshes with
arbitrary resolution without retraining. To foster research in this direction,
we release our code on the project page: https://yuxuan-xue.com/nsf.
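The core idea, a continuous displacement field defined on a base surface, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weights here are random (NSF learns them), and all names are hypothetical. It shows why the same field can be queried at base meshes of any resolution without retraining.

```python
import numpy as np

# Minimal sketch of a neural surface field: a small MLP maps canonical
# surface points to displacement vectors. Because the field is continuous,
# it can be evaluated at the vertices of ANY base mesh resolution.
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((3, 64)), np.zeros(64)
W2, b2 = 0.1 * rng.standard_normal((64, 3)), np.zeros(3)

def displacement_field(points):
    """Map canonical surface points (N, 3) to displacement vectors (N, 3)."""
    hidden = np.tanh(points @ W1 + b1)
    return hidden @ W2 + b2

def deform(base_vertices):
    """Deform a base surface of arbitrary vertex count with the same field."""
    return base_vertices + displacement_field(base_vertices)

coarse = rng.uniform(-1.0, 1.0, size=(100, 3))    # low-res base mesh
fine = rng.uniform(-1.0, 1.0, size=(10000, 3))    # high-res base mesh
print(deform(coarse).shape, deform(fine).shape)   # same field, two resolutions
```

Because the field is a smooth function of surface position rather than a per-vertex table, nearby points on the surface receive nearby displacements, which is what keeps the extracted meshes coherent across resolutions.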
Related papers
- NeuralClothSim: Neural Deformation Fields Meet the Thin Shell Theory [70.10550467873499]
We propose NeuralClothSim, a new quasistatic cloth simulator using thin shells.
Our memory-efficient solver operates on a new continuous coordinate-based surface representation called neural deformation fields.
arXiv Detail & Related papers (2023-08-24T17:59:54Z)
- MoDA: Modeling Deformable 3D Objects from Casual Videos [84.29654142118018]
We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts.
To register 2D pixels across different frames, we establish correspondences between canonical feature embeddings that encode 3D points within the canonical space.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods.
arXiv Detail & Related papers (2023-04-17T13:49:04Z)
- NeAT: Learning Neural Implicit Surfaces with Arbitrary Topologies from Multi-view Images [17.637064969966847]
NeAT is a new neural rendering framework that learns implicit surfaces with arbitrary topologies from multi-view images.
NeAT supports easy field-to-mesh conversion using the classic Marching Cubes algorithm.
Our approach is able to faithfully reconstruct both watertight and non-watertight surfaces.
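The classic Marching Cubes field-to-mesh conversion mentioned above can be sketched with scikit-image (an assumed dependency chosen for illustration; NeAT's own pipeline may differ). Here the field is a sampled signed distance function of a unit sphere:

```python
import numpy as np
from skimage import measure

# Sample a signed distance field of a unit sphere on a regular grid.
n = 32
xs = np.linspace(-1.5, 1.5, n)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
sdf = np.sqrt(X**2 + Y**2 + Z**2) - 1.0

# Marching Cubes extracts the triangle mesh of the zero level set.
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)

# Rescale vertices from grid-index coordinates to world coordinates.
verts = verts * (xs[1] - xs[0]) + xs[0]
radii = np.linalg.norm(verts, axis=1)
print(len(verts), "vertices, mean radius", radii.mean())  # radius close to 1
```

The extracted vertices lie near the unit sphere because Marching Cubes linearly interpolates the field's zero crossing along each grid edge; finer grids reduce the interpolation error.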
arXiv Detail & Related papers (2023-03-21T16:49:41Z)
- Neural Capture of Animatable 3D Human from Monocular Video [38.974181971541846]
We present a novel paradigm of building an animatable 3D human representation from a monocular video input, such that it can be rendered in any unseen poses and views.
Our method is based on a dynamic Neural Radiance Field (NeRF) rigged by a mesh-based parametric 3D human model serving as a geometry proxy.
arXiv Detail & Related papers (2022-08-18T09:20:48Z)
- φ-SfT: Shape-from-Template with a Physics-Based Deformation Model [69.27632025495512]
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera.
This paper proposes a new SfT approach explaining 2D observations through physical simulations.
arXiv Detail & Related papers (2022-03-22T17:59:57Z)
- Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos [63.16888987770885]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm.
We show that our approach significantly outperforms recent human modeling methods.
arXiv Detail & Related papers (2022-03-15T17:56:59Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.