TetraTSDF: 3D human reconstruction from a single image with a
tetrahedral outer shell
- URL: http://arxiv.org/abs/2004.10534v1
- Date: Wed, 22 Apr 2020 12:47:24 GMT
- Title: TetraTSDF: 3D human reconstruction from a single image with a
tetrahedral outer shell
- Authors: Hayato Onizuka, Zehra Hayirci, Diego Thomas, Akihiro Sugimoto, Hideaki
Uchiyama, Rin-ichiro Taniguchi
- Abstract summary: We propose the TetraTSDF model for the human body and its corresponding part connection network (PCN) for 3D human body shape regression.
Our proposed model is compact, dense, accurate, and yet well suited for CNN-based regression tasks.
Results show that our proposed method can reconstruct detailed shapes of humans wearing loose clothes from single RGB images.
- Score: 11.800651452572563
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovering the 3D shape of a person from their 2D appearance is
ill-posed due to ambiguities. Nevertheless, with the help of convolutional
neural networks (CNN) and prior knowledge on the 3D human body, it is possible
to overcome such ambiguities and recover detailed 3D shapes of human bodies
from single images. Current solutions, however, fail to reconstruct all the
details of a person wearing loose clothes. This is because of either (a) huge
memory requirements that cannot be met even on modern GPUs or (b) compact 3D
representations that cannot encode all the details. In this paper, we propose
the tetrahedral outer shell volumetric truncated signed distance function
(TetraTSDF) model for the human body, and its corresponding part connection
network (PCN) for 3D human body shape regression. Our proposed model is
compact, dense, accurate, and yet well suited for CNN-based regression tasks.
Our proposed PCN allows us to learn the distribution of the TSDF in the
tetrahedral volume from a single image in an end-to-end manner. Results show
that our proposed method can reconstruct detailed shapes of humans wearing
loose clothes from single RGB images.
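For readers unfamiliar with the representation, the sketch below shows how a
truncated signed distance function is typically evaluated at 3D sample points;
in TetraTSDF those samples would correspond to the vertices of the tetrahedral
outer shell around the body. This is a minimal illustration under stated
assumptions, not the authors' code: the function names, the truncation margin
tau, and the toy sphere SDF are all placeholders for the example.

```python
import numpy as np

def tsdf(points, signed_distance, tau=0.05):
    """Evaluate a truncated signed distance function (TSDF) at 3D points.

    points: (N, 3) array of sample locations; in the TetraTSDF setting these
            would be the vertices of the tetrahedral outer shell (assumption
            for illustration, not the paper's actual data layout).
    signed_distance: callable mapping an (N, 3) array to signed distances
            (negative inside the surface, positive outside).
    tau: truncation margin; distances are clipped to [-tau, tau] and
         normalized to [-1, 1].
    """
    d = signed_distance(points)
    return np.clip(d / tau, -1.0, 1.0)

# Toy usage with an analytic SDF: a unit sphere centered at the origin.
sphere_sdf = lambda p: np.linalg.norm(p, axis=-1) - 1.0
samples = np.random.uniform(-1.2, 1.2, size=(1000, 3))
values = tsdf(samples, sphere_sdf, tau=0.1)  # each value lies in [-1, 1]
```

The appeal of sampling only inside an outer shell, rather than over a full
voxel grid, is that memory is spent only near the body surface, which matches
the abstract's claim of a compact yet dense model.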
Related papers
- COSMU: Complete 3D human shape from monocular unconstrained images [24.08612483445495]
We present a novel framework to reconstruct complete 3D human shapes from a given target image by leveraging monocular unconstrained images.
The objective of this work is to reproduce high-quality details in regions of the reconstructed human body that are not visible in the input target image.
arXiv Detail & Related papers (2024-07-15T10:06:59Z)
- DiffHuman: Probabilistic Photorealistic 3D Reconstruction of Humans [38.8751809679184]
We present DiffHuman, a probabilistic method for 3D human reconstruction from a single RGB image.
Our experiments show that DiffHuman can produce diverse and detailed reconstructions for the parts of the person that are unseen or uncertain in the input image.
arXiv Detail & Related papers (2024-03-30T22:28:29Z)
- SHARP: Shape-Aware Reconstruction of People in Loose Clothing [6.469298908778292]
SHARP (SHape Aware Reconstruction of People in loose clothing) is a novel end-to-end trainable network.
It recovers the 3D geometry and appearance of humans in loose clothing from a monocular image.
We show superior qualitative and quantitative performance compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2022-05-24T10:26:42Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- NeuralReshaper: Single-image Human-body Retouching with Deep Neural Networks [50.40798258968408]
We present NeuralReshaper, a novel method for semantic reshaping of human bodies in single images using deep generative networks.
Our approach follows a fit-then-reshape pipeline, which first fits a parametric 3D human model to a source human image.
Because no paired training data exist for this task, we introduce a novel self-supervised strategy to train our network.
arXiv Detail & Related papers (2022-03-20T09:02:13Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- SHARP: Shape-Aware Reconstruction of People In Loose Clothing [6.796748304066826]
3D human body reconstruction from monocular images is an interesting and ill-posed problem in computer vision.
We propose SHARP, a novel end-to-end trainable network that accurately recovers the detailed geometry and appearance of 3D people in loose clothing from a monocular image.
We evaluate SHARP on the publicly available Cloth3D and THuman datasets and report superior performance to state-of-the-art approaches.
arXiv Detail & Related papers (2021-06-09T02:54:53Z)
- 3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data [77.57798334776353]
We consider the problem of obtaining dense 3D reconstructions of humans from single and partially occluded views.
We suggest that ambiguities can be modelled more effectively by parametrizing the possible body shapes and poses.
We show that our method outperforms alternative approaches in ambiguous pose recovery on standard benchmarks for 3D humans.
arXiv Detail & Related papers (2020-11-02T13:55:31Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present a methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)