A Modular Multi-stage Lightweight Graph Transformer Network for Human
Pose and Shape Estimation from 2D Human Pose
- URL: http://arxiv.org/abs/2301.13403v1
- Date: Tue, 31 Jan 2023 04:42:47 GMT
- Authors: Ayman Ali, Ekkasit Pinyoanuntapong, Pu Wang, Mohsen Dorodchi
- Score: 4.598337780022892
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this research, we address the challenge faced by existing deep
learning-based human mesh reconstruction methods in balancing accuracy and
computational efficiency. These methods typically prioritize accuracy,
resulting in large network sizes and excessive computational complexity, which
may hinder their practical application in real-world scenarios, such as virtual
reality systems. To address this issue, we introduce a modular multi-stage
lightweight graph-based transformer network for human pose and shape estimation
from 2D human pose: a pose-based human mesh reconstruction approach that
prioritizes computational efficiency without sacrificing reconstruction
accuracy. Our method consists of a 2D-to-3D lifter module, which uses graph
transformers to model the structured and implicit joint correlations in 2D
human poses, and a mesh regression module, which combines the extracted pose
features with a mesh template to produce the final human mesh parameters.
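The two-module design described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact architecture: all layer sizes, the placeholder chain skeleton, the learned template vector, and the 82-dimensional SMPL-style output (72 pose + 10 shape parameters) are assumptions.

```python
import torch
import torch.nn as nn

class GraphTransformerBlock(nn.Module):
    """Self-attention over joint tokens, masked by the skeleton adjacency
    so attention follows structured joint correlations (hypothetical design)."""
    def __init__(self, dim, heads, adjacency):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(),
                                 nn.Linear(2 * dim, dim))
        # Boolean mask: True = attention blocked (no skeleton edge).
        self.register_buffer("mask", adjacency == 0)

    def forward(self, x):                      # x: (B, J, dim)
        h, _ = self.attn(x, x, x, attn_mask=self.mask)
        x = self.norm1(x + h)
        return self.norm2(x + self.mlp(x))

class Pose2MeshSketch(nn.Module):
    """2D-to-3D lifter (graph transformer blocks) + mesh regression head
    that fuses pooled pose features with a stand-in template embedding."""
    def __init__(self, num_joints=17, dim=64, depth=2, smpl_dim=82):
        super().__init__()
        # Placeholder chain skeleton with self-loops; a real model would
        # use the actual kinematic-tree adjacency.
        eye = torch.eye(num_joints)
        adj = eye + eye.roll(1, 0) + eye.roll(-1, 0)
        self.embed = nn.Linear(2, dim)         # each 2D joint -> one token
        self.lifter = nn.Sequential(
            *[GraphTransformerBlock(dim, 4, adj) for _ in range(depth)])
        self.template = nn.Parameter(torch.zeros(dim))  # mesh-template stand-in
        self.regressor = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(),
                                       nn.Linear(dim, smpl_dim))

    def forward(self, joints_2d):              # (B, J, 2)
        tokens = self.lifter(self.embed(joints_2d))     # (B, J, dim)
        pooled = tokens.mean(dim=1)                     # global pose feature
        template = self.template.expand_as(pooled)
        return self.regressor(torch.cat([pooled, template], dim=-1))  # (B, 82)
```

Keeping the lifter and regressor as separate modules mirrors the "modular multi-stage" framing: the lifter can be swapped or deepened independently of the regression head.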
Related papers
- StackFLOW: Monocular Human-Object Reconstruction by Stacked Normalizing Flow with Offset [56.71580976007712]
We propose to use the Human-Object Offset between anchors, densely sampled from the surfaces of the human mesh and the object mesh, to represent the human-object spatial relation.
Based on this representation, we propose Stacked Normalizing Flow (StackFLOW) to infer the posterior distribution of human-object spatial relations from the image.
During the optimization stage, we finetune the human body pose and object 6D pose by maximizing the likelihood of samples.
arXiv Detail & Related papers (2024-07-30T04:57:21Z)
- SMPLer: Taming Transformers for Monocular 3D Human Shape and Pose Estimation [74.07836010698801]
We propose an SMPL-based Transformer framework (SMPLer) to address this issue.
SMPLer incorporates two key ingredients: a decoupled attention operation and an SMPL-based target representation.
Extensive experiments demonstrate the effectiveness of SMPLer against existing 3D human shape and pose estimation methods.
arXiv Detail & Related papers (2024-04-23T17:59:59Z)
- Self-supervised Human Mesh Recovery with Cross-Representation Alignment [20.69546341109787]
Self-supervised human mesh recovery methods have poor generalizability due to limited availability and diversity of 3D-annotated benchmark datasets.
We propose cross-representation alignment, which exploits the complementary information from the robust but sparse representation (2D keypoints) and the dense representation.
This adaptive cross-representation alignment explicitly learns from the deviations and captures complementary information: richness from the dense representation and robustness from the sparse representation.
arXiv Detail & Related papers (2022-09-10T04:47:20Z)
- Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net, which consists of a shared deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z)
- Adversarial Parametric Pose Prior [106.12437086990853]
We learn a prior that restricts the SMPL parameters to values that produce realistic poses via adversarial training.
We show that our learned prior covers the diversity of the real-data distribution, facilitates optimization for 3D reconstruction from 2D keypoints, and yields better pose estimates when used for regression from images.
arXiv Detail & Related papers (2021-12-08T10:05:32Z)
- A Lightweight Graph Transformer Network for Human Mesh Reconstruction from 2D Human Pose [8.816462200869445]
We present GTRS, a pose-based method that reconstructs a human mesh from a 2D human pose.
We demonstrate the efficiency and generalization of GTRS by extensive evaluations on the Human3.6M and 3DPW datasets.
arXiv Detail & Related papers (2021-11-24T18:48:03Z)
- THUNDR: Transformer-based 3D HUmaN Reconstruction with Markers [67.8628917474705]
THUNDR is a transformer-based deep neural network methodology that reconstructs the 3D pose and shape of people.
We show state-of-the-art results on Human3.6M and 3DPW, for both the fully-supervised and the self-supervised models.
We observe very solid 3D reconstruction performance for difficult human poses collected in the wild.
arXiv Detail & Related papers (2021-06-17T09:09:24Z)
- PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
arXiv Detail & Related papers (2020-07-08T02:26:19Z)
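Among the entries above, the Adversarial Parametric Pose Prior restricts SMPL parameters to realistic values via adversarial training. A minimal sketch of how such a learned prior could be wired in is shown below; the layer sizes, the single-logit discriminator head, and the loss formulation are illustrative assumptions, not that paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PosePriorDiscriminator(nn.Module):
    """Scores SMPL pose parameters as realistic vs. unrealistic.
    Hypothetical architecture: a small MLP over the 72-dim pose vector."""
    def __init__(self, pose_dim=72, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1))

    def forward(self, theta):            # theta: (B, pose_dim)
        return self.net(theta)           # realism logit per sample

def adversarial_prior_loss(disc, theta_pred):
    """Penalty on a pose regressor: its predictions should fool the
    discriminator into labeling them as real (label 1)."""
    logits = disc(theta_pred)
    return F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
```

In a full training loop the discriminator would alternately be trained on real mocap poses (label 1) and regressor outputs (label 0), while this prior loss is added to the regressor's reconstruction objective.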
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.