SPGNet: Spatial Projection Guided 3D Human Pose Estimation in Low
Dimensional Space
- URL: http://arxiv.org/abs/2206.01867v1
- Date: Sat, 4 Jun 2022 00:51:00 GMT
- Title: SPGNet: Spatial Projection Guided 3D Human Pose Estimation in Low
Dimensional Space
- Authors: Zihan Wang, Ruimin Chen, Mengxuan Liu, Guanfang Dong and Anup Basu
- Abstract summary: We propose a method for 3D human pose estimation that mixes multi-dimensional re-projection into supervised learning.
Based on the estimation results for the dataset Human3.6M, our approach outperforms many state-of-the-art methods both qualitatively and quantitatively.
- Score: 14.81199315166042
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method SPGNet for 3D human pose estimation that mixes
multi-dimensional re-projection into supervised learning. In this method, the
2D-to-3D-lifting network predicts the global position and coordinates of the 3D
human pose. Then, we re-project the estimated 3D pose back to the 2D key points
along with spatial adjustments. The loss functions compare the estimated 3D
pose with the 3D pose ground truth, and re-projected 2D pose with the input 2D
pose. In addition, we propose a kinematic constraint to restrict the predicted
target with constant human bone length. Based on the estimation results for the
dataset Human3.6M, our approach outperforms many state-of-the-art methods both
qualitatively and quantitatively.
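The pipeline described in the abstract (3D supervision, 2D re-projection of the estimated 3D pose, and a constant-bone-length kinematic constraint) can be sketched as a combined loss. This is a minimal illustrative sketch, not the authors' implementation: the loss weights, the example bone list, and the simple pinhole re-projection are all assumptions.

```python
import numpy as np

# Illustrative parent-child joint pairs; a real skeleton (e.g. Human3.6M's
# 17 joints) would list all bones.
BONES = [(0, 1), (1, 2), (2, 3)]

def project_to_2d(pose_3d, focal=1.0):
    """Pinhole re-projection of (J, 3) camera-space joints to (J, 2)."""
    z = np.clip(pose_3d[:, 2:3], 1e-6, None)  # guard against division by zero
    return focal * pose_3d[:, :2] / z

def bone_lengths(pose_3d, bones=BONES):
    """Euclidean length of each bone in a (J, 3) pose."""
    return np.array([np.linalg.norm(pose_3d[a] - pose_3d[b]) for a, b in bones])

def spgnet_style_loss(pred_3d, gt_3d, input_2d, ref_lengths, w2d=1.0, wbone=0.1):
    """Sum of the three loss terms the abstract describes (weights assumed)."""
    l_3d = np.mean((pred_3d - gt_3d) ** 2)                        # 3D supervision
    l_2d = np.mean((project_to_2d(pred_3d) - input_2d) ** 2)      # re-projection
    l_bone = np.mean((bone_lengths(pred_3d) - ref_lengths) ** 2)  # constant bone length
    return l_3d + w2d * l_2d + wbone * l_bone
```

A prediction that matches the ground truth, re-projects onto the input 2D pose, and preserves the reference bone lengths drives all three terms to zero.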
Related papers
- MPL: Lifting 3D Human Pose from Multi-view 2D Poses [75.26416079541723]
We propose combining 2D pose estimation, for which large and rich training datasets exist, and 2D-to-3D pose lifting, using a transformer-based network.
Our experiments demonstrate decreases of up to 45% in MPJPE compared to the 3D pose obtained by triangulating the 2D poses.
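MPJPE (Mean Per-Joint Position Error), the metric quoted above, is the average Euclidean distance between predicted and ground-truth joints. A minimal sketch (the array layout, joints in the last-but-one axis and coordinates in the last, is an assumption):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: average Euclidean distance between
    corresponding joints, in the units of the input (typically millimeters).
    Works on a single (J, 3) pose or a batched (N, J, 3) array."""
    return np.mean(np.linalg.norm(pred - gt, axis=-1))
```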
arXiv Detail & Related papers (2024-08-20T12:55:14Z)
- Unsupervised Multi-Person 3D Human Pose Estimation From 2D Poses Alone [4.648549457266638]
We present one of the first studies investigating the feasibility of unsupervised multi-person 2D-3D pose estimation.
Our method involves independently lifting each subject's 2D pose to 3D, before combining them in a shared 3D coordinate system.
This by itself enables us to retrieve an accurate 3D reconstruction of their poses.
arXiv Detail & Related papers (2023-09-26T11:42:56Z)
- MPM: A Unified 2D-3D Human Pose Representation via Masked Pose Modeling [59.74064212110042]
MPM can handle multiple tasks, including 3D human pose estimation, 3D pose estimation from occluded 2D poses, and 3D pose completion, in a single framework.
We conduct extensive experiments and ablation studies on several widely used human pose datasets and achieve state-of-the-art performance on MPI-INF-3DHP.
arXiv Detail & Related papers (2023-06-29T10:30:00Z)
- Learning to Estimate 3D Human Pose from Point Cloud [13.27496851711973]
We propose a deep human pose network for 3D pose estimation by taking the point cloud data as input data to model the surface of complex human structures.
Our experiments on two public datasets show that our approach achieves higher accuracy than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-12-25T14:22:01Z)
- PONet: Robust 3D Human Pose Estimation via Learning Orientations Only [116.1502793612437]
We propose a novel Pose Orientation Net (PONet) that is able to robustly estimate 3D pose by learning orientations only.
PONet estimates the 3D orientation of these limbs by taking advantage of the local image evidence to recover the 3D pose.
We evaluate our method on multiple datasets, including Human3.6M, MPII, MPI-INF-3DHP, and 3DPW.
arXiv Detail & Related papers (2021-12-21T12:48:48Z)
- Lifting 2D Human Pose to 3D with Domain Adapted 3D Body Concept [49.49032810966848]
Existing 3D pose estimation methods suffer from 1) the inherent ambiguity between 2D and 3D data, and 2) the lack of well-labeled 2D-3D pose pairs in the wild.
We propose a new framework that leverages the labeled 3D human poses to learn a 3D concept of the human body to reduce the ambiguity.
By adapting the two domains, the body knowledge learned from 3D poses is applied to 2D poses and guides the 2D pose encoder to generate informative 3D "imagination" as embedding in pose lifting.
arXiv Detail & Related papers (2021-11-23T16:02:12Z)
- SVMA: A GAN-based model for Monocular 3D Human Pose Estimation [0.8379286663107844]
We present an unsupervised GAN-based model to recover 3D human pose from 2D joint locations extracted from a single image.
Using a reprojection constraint, our model estimates the camera so that the estimated 3D pose can be reprojected to the original 2D pose.
Results on Human3.6M show that our method outperforms all the state-of-the-art methods, and results on MPI-INF-3DHP show that our method outperforms state-of-the-art by approximately 15.0%.
arXiv Detail & Related papers (2021-06-10T09:43:57Z)
- Weakly-supervised Cross-view 3D Human Pose Estimation [16.045255544594625]
We propose a simple yet effective pipeline for weakly-supervised cross-view 3D human pose estimation.
Our method can achieve state-of-the-art performance in a weakly-supervised manner.
We evaluate our method on the standard benchmark dataset, Human3.6M.
arXiv Detail & Related papers (2021-05-23T08:16:25Z)
- 3DCrowdNet: 2D Human Pose-Guided 3D Crowd Human Pose and Shape Estimation in the Wild [61.92656990496212]
3DCrowdNet is a 2D human pose-guided 3D crowd pose and shape estimation system for in-the-wild scenes.
We show that our 3DCrowdNet outperforms previous methods on in-the-wild crowd scenes.
arXiv Detail & Related papers (2021-04-15T08:21:28Z)
- Residual Pose: A Decoupled Approach for Depth-based 3D Human Pose Estimation [18.103595280706593]
We leverage recent advances in reliable CNN-based 2D pose estimation to estimate the 3D pose of people from depth images.
Our approach achieves very competitive results both in accuracy and speed on two public datasets.
arXiv Detail & Related papers (2020-11-10T10:08:13Z)
- Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach [76.10879433430466]
We propose to estimate 3D human pose from multi-view images and a few IMUs attached to a person's limbs.
It operates by first detecting 2D poses from the two signal types and then lifting them to 3D space.
The simple two-step approach reduces the error of the state-of-the-art by a large margin on a public dataset.
arXiv Detail & Related papers (2020-03-25T00:26:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.