Combining Implicit Function Learning and Parametric Models for 3D Human
Reconstruction
- URL: http://arxiv.org/abs/2007.11432v2
- Date: Fri, 26 Nov 2021 05:28:05 GMT
- Title: Combining Implicit Function Learning and Parametric Models for 3D Human
Reconstruction
- Authors: Bharat Lal Bhatnagar, Cristian Sminchisescu, Christian Theobalt,
Gerard Pons-Moll
- Abstract summary: Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Controllability over pose and shape is essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
- Score: 123.62341095156611
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit functions represented as deep learning approximations are powerful
for reconstructing 3D surfaces. However, they can only produce static surfaces
that are not controllable, which provides limited ability to modify the
resulting model by editing its pose or shape parameters. Nevertheless, such
features are essential in building flexible models for both computer graphics
and computer vision. In this work, we present methodology that combines
detail-rich implicit functions and parametric representations in order to
reconstruct 3D models of people that remain controllable and accurate even in
the presence of clothing. Given sparse 3D point clouds sampled on the surface
of a dressed person, we use an Implicit Part Network (IP-Net) to jointly predict
the outer 3D surface of the dressed person, the inner body surface, and the
semantic correspondences to a parametric body model. We subsequently use
correspondences to fit the body model to our inner surface and then non-rigidly
deform it (under a parametric body + displacement model) to the outer surface
in order to capture garment, face and hair detail. In quantitative and
qualitative experiments with both full body data and hand scans we show that
the proposed methodology generalizes, and is effective even given incomplete
point clouds collected from single-view depth images. Our models and code can
be downloaded from http://virtualhumans.mpi-inf.mpg.de/ipnet.
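As a concrete illustration of the correspondence-driven fitting step, the sketch below shows a rigid Kabsch/Procrustes alignment on synthetic data, the standard initialization when registering a body model to points with known correspondences. This is a minimal sketch, not the authors' implementation (see the code link above); the actual method solves a non-rigid SMPL and SMPL+D optimization, and `model_pts`/`surface_pts` are synthetic stand-ins for corresponding body-model vertices and predicted inner-surface points.

    import numpy as np

    # Minimal sketch: rigid correspondence-based alignment (Kabsch), a common
    # initialization before non-rigid body-model fitting.
    rng = np.random.default_rng(0)
    model_pts = rng.normal(size=(500, 3))     # stand-in body-model vertices

    # Synthesize a ground-truth rigid transform to recover.
    a = 0.6
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    t_true = np.array([0.3, -0.1, 0.5])
    surface_pts = model_pts @ R_true.T + t_true   # corresponding surface points

    # Kabsch: center both point sets, take the SVD of the cross-covariance,
    # and compose a proper (reflection-corrected) rotation plus a translation.
    mu_m, mu_s = model_pts.mean(axis=0), surface_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (surface_pts - mu_s)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_s - R @ mu_m

    print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True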
Related papers
- ECON: Explicit Clothed humans Optimized via Normal integration [54.51948104460489]
We present ECON, a method for creating 3D humans in loose clothes.
It infers detailed 2D maps for the front and back side of a clothed person.
It "inpaints" the missing geometry between d-BiNI surfaces.
arXiv Detail & Related papers (2022-12-14T18:59:19Z) - Neural Capture of Animatable 3D Human from Monocular Video [38.974181971541846]
We present a novel paradigm for building an animatable 3D human representation from a monocular video input, such that it can be rendered in unseen poses and from unseen views.
Our method is based on a dynamic Neural Radiance Field (NeRF) rigged by a mesh-based parametric 3D human model serving as a geometry proxy.
arXiv Detail & Related papers (2022-08-18T09:20:48Z) - Single-view 3D Body and Cloth Reconstruction under Complex Poses [37.86174829271747]
We extend existing implicit function-based models to deal with images of humans with arbitrary poses and self-occluded limbs.
We learn an implicit function that maps the input image to a 3D body shape with a low level of detail.
We then learn a displacement map, conditioned on the smoothed surface, which encodes the high-frequency details of the clothes and body.
arXiv Detail & Related papers (2022-05-09T07:34:06Z) - LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human
Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z) - imGHUM: Implicit Generative Models of 3D Human Shape and Articulated
Pose [42.4185273307021]
We present imGHUM, the first holistic generative model of 3D human shape and articulated pose.
We model the full human body implicitly as the zero-level-set of a function, without the use of an explicit template mesh (a minimal zero-level-set sketch appears after this list).
arXiv Detail & Related papers (2021-08-24T17:08:28Z) - Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z) - SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z) - S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling [103.65625425020129]
We represent the pedestrian's shape, pose and skinning weights as neural implicit functions that are directly learned from data.
We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods.
arXiv Detail & Related papers (2021-01-17T02:16:56Z)