ECON: Explicit Clothed humans Optimized via Normal integration
- URL: http://arxiv.org/abs/2212.07422v2
- Date: Thu, 23 Mar 2023 14:27:38 GMT
- Title: ECON: Explicit Clothed humans Optimized via Normal integration
- Authors: Yuliang Xiu, Jinlong Yang, Xu Cao, Dimitrios Tzionas, Michael J. Black
- Abstract summary: We present ECON, a method for creating 3D humans in loose clothes.
It infers detailed 2D maps for the front and back side of a clothed person.
It "inpaints" the missing geometry between d-BiNI surfaces.
- Score: 54.51948104460489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The combination of deep learning, artist-curated scans, and Implicit
Functions (IF), is enabling the creation of detailed, clothed, 3D humans from
images. However, existing methods are far from perfect. IF-based methods
recover free-form geometry, but produce disembodied limbs or degenerate shapes
for novel poses or clothes. To increase robustness for these cases, existing
work uses an explicit parametric body model to constrain surface
reconstruction, but this limits the recovery of free-form surfaces such as
loose clothing that deviates from the body. What we want is a method that
combines the best properties of implicit representation and explicit body
regularization. To this end, we make two key observations: (1) current networks
are better at inferring detailed 2D maps than full-3D surfaces, and (2) a
parametric model can be seen as a "canvas" for stitching together detailed
surface patches. Based on these, our method, ECON, has three main steps: (1) It
infers detailed 2D normal maps for the front and back side of a clothed person.
(2) From these, it recovers 2.5D front and back surfaces, called d-BiNI, that
are equally detailed, yet incomplete, and registers these w.r.t. each other
with the help of a SMPL-X body mesh recovered from the image. (3) It "inpaints"
the missing geometry between d-BiNI surfaces. If the face and hands are noisy,
they can optionally be replaced with the ones of SMPL-X. As a result, ECON
infers high-fidelity 3D humans even in loose clothes and challenging poses.
This goes beyond previous methods, according to the quantitative evaluation on
the CAPE and Renderpeople datasets. Perceptual studies also show that ECON's
perceived realism is better by a large margin. Code and models are available
for research purposes at econ.is.tue.mpg.de
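The three-step pipeline in the abstract can be sketched as a toy program. Everything here is illustrative: `integrate_normals` uses a naive cumulative-sum integration (not ECON's actual d-BiNI solver), and the "inpainting" is a trivial averaging stand-in for the real SMPL-X-guided completion.

```python
import numpy as np

def integrate_normals(normals):
    """Toy stand-in for d-BiNI: recover a depth map from a unit-normal
    map by integrating the implied depth gradients (dz/dx = -nx/nz)."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    dzdx = -nx / nz
    dzdy = -ny / nz
    # Naive integration: cumulative sums along rows and columns, averaged.
    return 0.5 * (np.cumsum(dzdx, axis=1) + np.cumsum(dzdy, axis=0))

def econ_sketch(front_normals, back_normals):
    """Illustrative ECON-style pipeline: recover 2.5D front/back surfaces
    from the two normal maps (step 2), then 'fill' the gap between them
    (step 3) -- here by simple averaging rather than true inpainting."""
    front = integrate_normals(front_normals)   # 2.5D front surface
    back = integrate_normals(back_normals)     # 2.5D back surface
    filled = 0.5 * (front + back)              # toy gap completion
    return front, back, filled

# Degenerate example: a flat frontal surface, all normals facing the
# camera (0, 0, 1), so the recovered depth is constant (zero).
h, w = 8, 8
flat = np.zeros((h, w, 3))
flat[..., 2] = 1.0
f, b, mid = econ_sketch(flat, flat)
```

In the actual method, step (1) would first produce the two normal maps with a trained network, and the front/back surfaces are registered to each other using the SMPL-X body fit before completion.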
Related papers
- FAMOUS: High-Fidelity Monocular 3D Human Digitization Using View Synthesis [51.193297565630886]
The challenge of accurately inferring texture remains, particularly in obscured areas such as the back of a person in frontal-view images.
This limitation in texture prediction largely stems from the scarcity of large-scale and diverse 3D datasets.
We propose leveraging extensive 2D fashion datasets to enhance both texture and shape prediction in 3D human digitization.
arXiv Detail & Related papers (2024-10-13T01:25:05Z) - RAFaRe: Learning Robust and Accurate Non-parametric 3D Face Reconstruction from Pseudo 2D&3D Pairs [13.11105614044699]

We propose a robust and accurate non-parametric method for single-view 3D face reconstruction (SVFR).
A large-scale pseudo 2D&3D dataset is created by first rendering the detailed 3D faces, then swapping the face in the wild images with the rendered face.
Our model outperforms previous methods on FaceScape-wild/lab and MICC benchmarks.
arXiv Detail & Related papers (2023-02-10T19:40:26Z) - ICON: Implicit Clothed humans Obtained from Normals [49.5397825300977]
Implicit functions are well suited to the first task, as they can capture details like hair or clothes.
ICON infers detailed clothed-human normals conditioned on the SMPL(-X) normals.
ICON takes a step towards robust 3D clothed human reconstruction from in-the-wild images.
arXiv Detail & Related papers (2021-12-16T18:59:41Z) - Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z) - Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose [70.23652933572647]
We propose a novel graph convolutional neural network (GraphCNN)-based system that estimates the 3D coordinates of human mesh vertices directly from the 2D human pose.
We show that our Pose2Mesh outperforms the previous 3D human pose and mesh estimation methods on various benchmark datasets.
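The core operation of a GraphCNN regressor like the one summarized above can be sketched as a single graph-convolution step over the mesh adjacency; the layer shape and the toy chain graph are illustrative assumptions, not Pose2Mesh's actual architecture.

```python
import numpy as np

def graph_conv(x, adj, weight):
    """One graph-convolution step: aggregate each vertex's neighbor
    features via a row-normalized adjacency (self-loops included in
    adj), then apply a shared linear map."""
    deg = adj.sum(axis=1, keepdims=True)
    a_norm = adj / np.maximum(deg, 1e-8)   # row-normalize adjacency
    return a_norm @ x @ weight

# Toy "mesh": 4 vertices in a chain, with self-loops.
adj = np.eye(4)
for i in range(3):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))    # per-vertex features lifted from a 2D pose
w = rng.normal(size=(2, 3))    # maps features to 3D vertex coordinates
verts = graph_conv(x, adj, w)  # (4, 3) regressed 3D vertex positions
```

A full system would stack several such layers with nonlinearities and learn `w` by supervising the output vertices against ground-truth meshes.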
arXiv Detail & Related papers (2020-08-20T16:01:56Z) - Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.