Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh
Recovery from a 2D Human Pose
- URL: http://arxiv.org/abs/2008.09047v3
- Date: Tue, 27 Apr 2021 08:48:41 GMT
- Authors: Hongsuk Choi, Gyeongsik Moon, Kyoung Mu Lee
- Abstract summary: We propose a novel graph convolutional neural network (GraphCNN)-based system that estimates the 3D coordinates of human mesh vertices directly from the 2D human pose.
We show that our Pose2Mesh outperforms the previous 3D human pose and mesh estimation methods on various benchmark datasets.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Most of the recent deep learning-based 3D human pose and mesh estimation
methods regress the pose and shape parameters of human mesh models, such as
SMPL and MANO, from an input image. The first weakness of these methods is an
appearance domain gap problem, due to different image appearance between train
data from controlled environments, such as a laboratory, and test data from
in-the-wild environments. The second weakness is that the estimation of the
pose parameters is quite challenging owing to the representation issues of 3D
rotations. To overcome the above weaknesses, we propose Pose2Mesh, a novel
graph convolutional neural network (GraphCNN)-based system that estimates the
3D coordinates of human mesh vertices directly from the 2D human pose. The 2D
human pose as input provides essential human body articulation information,
while having a relatively homogeneous geometric property between the two
domains. Also, the proposed system avoids the representation issues, while
fully exploiting the mesh topology using a GraphCNN in a coarse-to-fine manner.
We show that our Pose2Mesh outperforms the previous 3D human pose and mesh
estimation methods on various benchmark datasets. For the code, see
https://github.com/hongsukchoi/Pose2Mesh_RELEASE.
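The abstract describes regressing 3D mesh vertex coordinates from a 2D pose with a GraphCNN that exploits the mesh topology. A minimal sketch of one graph-convolution step in that spirit is below; the function names, graph, and dimensions are illustrative assumptions, not Pose2Mesh's actual architecture or code (see the repository above for the real implementation).

```python
# Hypothetical sketch of one GCN layer over a skeleton/mesh graph:
# X' = ReLU(A_hat @ X @ W), with the standard symmetric normalization
# A_hat = D^{-1/2} (A + I) D^{-1/2}. Pure Python for self-containment.

def normalize_adjacency(adj):
    # Add self-loops, then normalize by sqrt of node degrees.
    n = len(adj)
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    return [[a[i][j] / ((deg[i] * deg[j]) ** 0.5) for j in range(n)]
            for i in range(n)]

def graph_conv(a_hat, x, w):
    # One layer: aggregate neighbor features (a_hat @ x), project (w), ReLU.
    n, in_dim, out_dim = len(x), len(w), len(w[0])
    xw = [[sum(x[i][k] * w[k][j] for k in range(in_dim)) for j in range(out_dim)]
          for i in range(n)]
    out = [[sum(a_hat[i][k] * xw[k][j] for j2 in [j] for k in range(n))
            for j in range(out_dim)] for i in range(n)]
    return [[max(0.0, v) for v in row] for row in out]

# Toy 3-joint chain graph; input is 2D joint coordinates, the (made-up)
# weight matrix lifts features from 2 to 3 dimensions.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
a_hat = normalize_adjacency(adj)
x = [[0.0, 0.0], [0.5, 1.0], [1.0, 2.0]]        # 2D pose (n_joints x 2)
w = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5]]          # 2 -> 3 feature lift
y = graph_conv(a_hat, x, w)                      # n_joints x 3
```

Stacking such layers while progressively upsampling the graph (joints → coarse mesh → full mesh) is one way to realize the coarse-to-fine vertex regression the abstract mentions.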
Related papers
- FAMOUS: High-Fidelity Monocular 3D Human Digitization Using View Synthesis [51.193297565630886]
The challenge of accurately inferring texture remains, particularly in obscured areas such as the back of a person in frontal-view images.
This limitation in texture prediction largely stems from the scarcity of large-scale and diverse 3D datasets.
We propose leveraging extensive 2D fashion datasets to enhance both texture and shape prediction in 3D human digitization.
arXiv Detail & Related papers (2024-10-13T01:25:05Z)
- Co-Evolution of Pose and Mesh for 3D Human Body Estimation from Video [23.93644678238666]
We propose a Pose and Mesh Co-Evolution network (PMCE) to recover 3D human motion from a video.
The proposed PMCE outperforms previous state-of-the-art methods in terms of both per-frame accuracy and temporal consistency.
arXiv Detail & Related papers (2023-08-20T16:03:21Z)
- MPM: A Unified 2D-3D Human Pose Representation via Masked Pose Modeling [59.74064212110042]
MPM can handle multiple tasks, including 3D human pose estimation, 3D pose estimation from occluded 2D pose, and 3D pose completion, in a single framework.
We conduct extensive experiments and ablation studies on several widely used human pose datasets and achieve state-of-the-art performance on MPI-INF-3DHP.
arXiv Detail & Related papers (2023-06-29T10:30:00Z)
- Sampling is Matter: Point-guided 3D Human Mesh Reconstruction [0.0]
This paper presents a simple yet powerful method for 3D human mesh reconstruction from a single RGB image.
Experimental results on benchmark datasets show that the proposed method efficiently improves the performance of 3D human mesh reconstruction.
arXiv Detail & Related papers (2023-04-19T08:45:26Z)
- MUG: Multi-human Graph Network for 3D Mesh Reconstruction from 2D Pose [20.099670445427964]
Reconstructing multi-human body mesh from a single monocular image is an important but challenging computer vision problem.
In this work, through a single graph neural network, we construct coherent multi-human meshes using only multi-human 2D pose as input.
arXiv Detail & Related papers (2022-05-25T08:54:52Z)
- 3D Human Pose Regression using Graph Convolutional Network [68.8204255655161]
We propose a graph convolutional network named PoseGraphNet for 3D human pose regression from 2D poses.
Our model's performance is close to the state-of-the-art, but with much fewer parameters.
arXiv Detail & Related papers (2021-05-21T14:41:31Z)
- Unsupervised 3D Human Pose Representation with Viewpoint and Pose Disentanglement [63.853412753242615]
Learning a good 3D human pose representation is important for human pose related tasks.
We propose a novel Siamese denoising autoencoder to learn a 3D pose representation.
Our approach achieves state-of-the-art performance on two inherently different tasks.
arXiv Detail & Related papers (2020-07-14T14:25:22Z)
- Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach [76.10879433430466]
We propose to estimate 3D human pose from multi-view images and a few IMUs attached to the person's limbs.
It operates by first detecting 2D poses from the two signals, and then lifting them to 3D space.
The simple two-step approach reduces the error of the state-of-the-art by a large margin on a public dataset.
arXiv Detail & Related papers (2020-03-25T00:26:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.