Learning Nonparametric Human Mesh Reconstruction from a Single Image
without Ground Truth Meshes
- URL: http://arxiv.org/abs/2003.00052v1
- Date: Fri, 28 Feb 2020 20:30:07 GMT
- Title: Learning Nonparametric Human Mesh Reconstruction from a Single Image
without Ground Truth Meshes
- Authors: Kevin Lin, Lijuan Wang, Ying Jin, Zicheng Liu, Ming-Ting Sun
- Abstract summary: We propose a novel approach to learn human mesh reconstruction without any ground truth meshes.
This is made possible by introducing two new terms into the loss function of a graph convolutional neural network (Graph CNN)
- Score: 56.27436157101251
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nonparametric approaches have shown promising results on reconstructing 3D
human mesh from a single monocular image. Unlike previous approaches that use a
parametric human model like skinned multi-person linear model (SMPL), and
attempt to regress the model parameters, nonparametric approaches relax the
heavy reliance on the parametric space. However, existing nonparametric methods
require ground truth meshes as their regression target for each vertex, and
obtaining ground truth mesh labels is very expensive. In this paper, we propose
a novel approach to learn human mesh reconstruction without any ground truth
meshes. This is made possible by introducing two new terms into the loss
function of a graph convolutional neural network (Graph CNN). The first term is
the Laplacian prior that acts as a regularizer on the reconstructed mesh. The
second term is the part segmentation loss that forces the projected region of
the reconstructed mesh to match the part segmentation. Experimental results on
multiple public datasets show that without using 3D ground truth meshes, the
proposed approach outperforms the previous state-of-the-art approaches that
require ground truth meshes for training.
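The two loss terms named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the uniform graph Laplacian and the vertex-occupancy IoU below are simplified stand-ins (the paper projects the full mesh surface, and its loss is differentiable for training), and the toy mesh and mask are invented for illustration.

```python
import numpy as np

def laplacian_prior(vertices, faces):
    """Uniform Laplacian smoothness term: penalize each vertex's deviation
    from the centroid of its mesh neighbors. A minimal sketch; the paper's
    exact Laplacian weighting is not reproduced here."""
    n = len(vertices)
    adj = np.zeros((n, n))
    for a, b, c in faces:  # mark the three edges of each triangle
        adj[a, b] = adj[b, a] = 1.0
        adj[b, c] = adj[c, b] = 1.0
        adj[c, a] = adj[a, c] = 1.0
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    neighbor_mean = adj @ vertices / deg
    return float(np.mean(np.sum((vertices - neighbor_mean) ** 2, axis=1)))

def part_segmentation_loss(projected_px, part_mask):
    """Crude 1 - IoU between pixels covered by projected vertices and a
    ground-truth part mask. Only a non-differentiable stand-in that shows
    the supervision signal; the paper uses differentiable mesh projection."""
    pred = np.zeros_like(part_mask, dtype=bool)
    h, w = part_mask.shape
    for x, y in projected_px:
        if 0 <= y < h and 0 <= x < w:
            pred[y, x] = True
    gt = part_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    return 1.0 - np.logical_and(pred, gt).sum() / union

# Toy example: a flat square made of two triangles.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
print(round(laplacian_prior(verts, faces), 4))  # → 0.6944

# Two projected vertex pixels vs. a 2x2 part mask with one labeled pixel.
mask = np.array([[1, 0], [0, 0]])
print(part_segmentation_loss([(0, 0), (1, 0)], mask))  # → 0.5
```

In training, both terms would be added to the Graph CNN's loss so that mesh regression needs no per-vertex 3D labels, only 2D part segmentation supervision plus the smoothness prior.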
Related papers
- Sampling is Matter: Point-guided 3D Human Mesh Reconstruction [0.0]
This paper presents a simple yet powerful method for 3D human mesh reconstruction from a single RGB image.
Experimental results on benchmark datasets show that the proposed method efficiently improves the performance of 3D human mesh reconstruction.
arXiv Detail & Related papers (2023-04-19T08:45:26Z)
- Self-supervised Human Mesh Recovery with Cross-Representation Alignment [20.69546341109787]
Self-supervised human mesh recovery methods have poor generalizability due to limited availability and diversity of 3D-annotated benchmark datasets.
We propose cross-representation alignment utilizing the complementary information from the robust but sparse representation (2D keypoints)
This adaptive cross-representation alignment explicitly learns from the deviations and captures complementary information: robustness from the sparse representation and richness from the dense representation.
arXiv Detail & Related papers (2022-09-10T04:47:20Z)
- Adversarial Parametric Pose Prior [106.12437086990853]
We learn a prior that restricts the SMPL parameters to values that produce realistic poses via adversarial training.
We show that our learned prior covers the diversity of the real-data distribution, facilitates optimization for 3D reconstruction from 2D keypoints, and yields better pose estimates when used for regression from images.
arXiv Detail & Related papers (2021-12-08T10:05:32Z)
- 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop [128.07841893637337]
Regression-based methods have recently shown promising results in reconstructing human meshes from monocular images.
Minor deviations in parameters may lead to noticeable misalignment between the estimated meshes and the image evidence.
We propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop to leverage a feature pyramid and rectify the predicted parameters.
arXiv Detail & Related papers (2021-03-30T17:07:49Z)
- Im2Mesh GAN: Accurate 3D Hand Mesh Recovery from a Single RGB Image [31.371190180801452]
We show that the hand mesh can be learned directly from the input image.
We propose a new type of GAN called Im2Mesh GAN to learn the mesh through end-to-end adversarial training.
arXiv Detail & Related papers (2021-01-27T07:38:01Z)
- Ellipse Regression with Predicted Uncertainties for Accurate Multi-View 3D Object Estimation [26.930403135038475]
This work considers objects whose three-dimensional models can be represented as ellipsoids.
We present a variant of Mask R-CNN for estimating the parameters of ellipsoidal objects by segmenting each object and accurately regressing the parameters of projection ellipses.
arXiv Detail & Related papers (2020-12-27T19:52:58Z)
- 3D Dense Geometry-Guided Facial Expression Synthesis by Adversarial Learning [54.24887282693925]
We propose a novel framework to exploit 3D dense (depth and surface normals) information for expression manipulation.
We use an off-the-shelf state-of-the-art 3D reconstruction model to estimate the depth and create a large-scale RGB-Depth dataset.
Our experiments demonstrate that the proposed method outperforms the competitive baseline and existing arts by a large margin.
arXiv Detail & Related papers (2020-09-30T17:12:35Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people from an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
- Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering [53.16864661460889]
Recent work has succeeded with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.