Weakly-Supervised 3D Reconstruction of Clothed Humans via Normal Maps
- URL: http://arxiv.org/abs/2311.16042v1
- Date: Mon, 27 Nov 2023 18:06:35 GMT
- Title: Weakly-Supervised 3D Reconstruction of Clothed Humans via Normal Maps
- Authors: Jane Wu, Diego Thomas, Ronald Fedkiw
- Abstract summary: We present a novel deep learning-based approach to the 3D reconstruction of clothed humans using weak supervision via 2D normal maps.
Given a single RGB image or multiview images, our network infers a signed distance function (SDF) discretized on a tetrahedral mesh surrounding the body in a rest pose.
We demonstrate the efficacy of our approach for both network inference and 3D reconstruction.
- Score: 1.6462601662291156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel deep learning-based approach to the 3D reconstruction of
clothed humans using weak supervision via 2D normal maps. Given a single RGB
image or multiview images, our network infers a signed distance function (SDF)
discretized on a tetrahedral mesh surrounding the body in a rest pose.
Subsequently, inferred pose and camera parameters are used to generate a normal
map from the SDF. A key aspect of our approach is the use of Marching
Tetrahedra to (uniquely) compute a triangulated surface from the SDF on the
tetrahedral mesh, facilitating straightforward differentiation (and thus
backpropagation). Thus, given only ground truth normal maps (with no volumetric
ground truth information), we can train the network to produce SDF
values from corresponding RGB images. Optionally, an additional multiview loss
leads to improved results. We demonstrate the efficacy of our approach for both
network inference and 3D reconstruction.
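To make the differentiability claim concrete, here is a minimal PyTorch sketch (an illustration, not the authors' code) of the property the abstract relies on: Marching Tetrahedra places a surface vertex at the unique zero crossing along any tetrahedron edge whose endpoint SDF values differ in sign, so every surface vertex is a simple differentiable function of the network's SDF outputs.

```python
import torch

def edge_zero_crossings(x0, x1, s0, s1):
    """Surface vertices where the SDF changes sign along tet mesh edges.

    x0, x1: (E, 3) edge endpoint positions (the fixed rest-pose tet mesh).
    s0, s1: (E,) SDF values at those endpoints (network outputs).
    Returns (K, 3) vertices, differentiable w.r.t. s0 and s1.
    """
    crossing = (s0 * s1) < 0                          # edges straddling the zero level set
    t = s0[crossing] / (s0[crossing] - s1[crossing])  # unique linear interpolation weight
    return x0[crossing] + t.unsqueeze(-1) * (x1[crossing] - x0[crossing])

# Toy example: one edge pierced by the surface.
x0 = torch.tensor([[0.0, 0.0, 0.0]])
x1 = torch.tensor([[1.0, 0.0, 0.0]])
s = torch.tensor([-0.25, 0.75], requires_grad=True)  # SDF values at the two endpoints
v = edge_zero_crossings(x0, x1, s[0:1], s[1:2])      # vertex at x = 0.25
v.sum().backward()                                   # gradients reach the SDF values
print(v, s.grad)
```

In the full pipeline, these vertices form the triangles of the extracted surface, whose rendered normal map is compared against the ground truth normal map; the loss backpropagates through this interpolation into the network.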
Related papers
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles Single-view 3D Mesh Reconstruction to study model generalization to unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previously reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- FIRe: Fast Inverse Rendering using Directional and Signed Distance Functions [97.5540646069663]
We introduce a novel neural scene representation that we call the directional distance function (DDF).
Our DDF is defined on the unit sphere and predicts the distance to the surface along any given direction.
Based on our DDF, we present a novel fast algorithm (FIRe) to reconstruct 3D shapes given a posed depth map.
arXiv Detail & Related papers (2022-03-30T13:24:04Z)
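As a hypothetical sketch of the DDF idea in FIRe (a toy model with made-up layer sizes, not the paper's architecture): a network maps a ray's entry point on the unit sphere together with its direction to a hit distance, so a depth map costs one network evaluation per ray rather than the many evaluations of sphere tracing an SDF.

```python
import torch
import torch.nn as nn

class DirectionalDistanceField(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(          # (sphere point, direction) -> distance
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_sphere, direction):
        # x_sphere: (N, 3) ray entry points on the unit sphere
        # direction: (N, 3) unit ray directions
        return self.mlp(torch.cat([x_sphere, direction], dim=-1)).squeeze(-1)

ddf = DirectionalDistanceField()
x = nn.functional.normalize(torch.randn(4, 3), dim=-1)  # points on the sphere
d = nn.functional.normalize(torch.randn(4, 3), dim=-1)  # ray directions
depth = ddf(x, d)                                       # one query per ray
hit_points = x + depth.unsqueeze(-1) * d                # surface samples along each ray
```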
- Multi-initialization Optimization Network for Accurate 3D Human Pose and Shape Estimation [75.44912541912252]
We propose a three-stage framework named Multi-Initialization Optimization Network (MION).
In the first stage, we strategically select different coarse 3D reconstruction candidates that are compatible with the 2D keypoints of the input sample.
In the second stage, we design a mesh refinement transformer (MRT) to refine each coarse reconstruction result via a self-attention mechanism.
Finally, a Consistency Estimation Network (CEN) is proposed to find the best result from multiple candidates by evaluating whether the visual evidence in the RGB image matches a given 3D reconstruction.
arXiv Detail & Related papers (2021-12-24T02:43:58Z)
- Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z)
- Im2Mesh GAN: Accurate 3D Hand Mesh Recovery from a Single RGB Image [31.371190180801452]
We show that the hand mesh can be learned directly from the input image.
We propose a new type of GAN called Im2Mesh GAN to learn the mesh through end-to-end adversarial training.
arXiv Detail & Related papers (2021-01-27T07:38:01Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation renders 2-3 orders of magnitude faster than previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- 3D Human Mesh Regression with Dense Correspondence [95.92326689172877]
Estimating a 3D mesh of the human body from a single 2D image is an important task with many applications, such as augmented reality and human-robot interaction.
Prior works reconstruct the 3D mesh from a global image feature extracted by a convolutional neural network (CNN), in which the dense correspondences between the mesh surface and the image pixels are missing.
This paper proposes a model-free 3D human mesh estimation framework, named DecoMR, which explicitly establishes the dense correspondence between the mesh and the local image features in the UV space.
arXiv Detail & Related papers (2020-06-10T08:50:53Z)
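As a rough illustration of the UV-space transfer idea in DecoMR (hypothetical shapes and names; in the real method the correspondence map is itself predicted by a network), local image features can be gathered onto a UV map with a differentiable lookup, after which a decoder can regress a per-texel 3D location map, i.e. the mesh.

```python
import torch
import torch.nn.functional as F

B, C, H, W = 1, 32, 64, 64          # image feature map from a 2D CNN
feat = torch.randn(B, C, H, W)

# uv_to_image[b, v, u] = (x, y) image location (in [-1, 1]) seen by UV texel (u, v);
# random here, predicted in the actual framework.
Huv, Wuv = 56, 56
uv_to_image = torch.rand(B, Huv, Wuv, 2) * 2 - 1

# Gather local image features onto the UV map.
uv_feat = F.grid_sample(feat, uv_to_image, align_corners=False)  # (B, C, Huv, Wuv)
```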
- Atlas: End-to-End 3D Scene Reconstruction from Posed Images [13.154808583020229]
We present an end-to-end 3D reconstruction method for a scene by directly regressing a truncated signed distance function (TSDF) from a set of posed RGB images.
A 2D CNN extracts features from each image independently; these features are then back-projected and accumulated into a voxel volume.
A 3D CNN refines the accumulated features and predicts the TSDF values.
arXiv Detail & Related papers (2020-03-23T17:59:15Z)
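A simplified sketch of the Atlas-style back-projection step (a hypothetical helper with assumed shapes, not the authors' code): project every voxel center into each posed view, bilinearly sample the 2D features there, and average over the views that see the voxel; a 3D CNN would then refine this volume and regress the TSDF.

```python
import torch
import torch.nn.functional as F

def backproject(feats, K, cam_T_world, voxel_xyz):
    """feats: (V, C, H, W) per-view features; K: (3, 3) intrinsics;
    cam_T_world: (V, 4, 4) world-to-camera poses; voxel_xyz: (N, 3) voxel centers.
    Returns (N, C) features averaged over the views that see each voxel."""
    V, C, H, W = feats.shape
    homog = torch.cat([voxel_xyz, torch.ones(len(voxel_xyz), 1)], dim=-1)  # (N, 4)
    cam = torch.einsum('vij,nj->vni', cam_T_world, homog)[..., :3]         # (V, N, 3)
    pix = torch.einsum('ij,vnj->vni', K, cam)                              # (V, N, 3)
    z = pix[..., 2:3].clamp(min=1e-6)
    xy = pix[..., :2] / z                                                  # pixel coords
    grid = torch.stack([xy[..., 0] / (W - 1), xy[..., 1] / (H - 1)], -1) * 2 - 1
    sampled = F.grid_sample(feats, grid.unsqueeze(1), align_corners=True)  # (V, C, 1, N)
    valid = ((grid.abs() <= 1).all(-1) & (cam[..., 2] > 0)).float()        # (V, N)
    acc = (sampled.squeeze(2) * valid.unsqueeze(1)).sum(0)                 # (C, N)
    return (acc / valid.sum(0).clamp(min=1)).t()                           # (N, C)

# Usage with dummy data: 2 views, 8 voxels in front of identity-pose cameras.
feats = torch.randn(2, 16, 60, 80)
K = torch.tensor([[100.0, 0, 40], [0, 100.0, 30], [0, 0, 1]])
poses = torch.eye(4).repeat(2, 1, 1)
vol_feat = backproject(feats, K, poses, torch.randn(8, 3) + torch.tensor([0.0, 0.0, 2.0]))
```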
This list is automatically generated from the titles and abstracts of the papers in this site.