A Lightweight Graph Transformer Network for Human Mesh Reconstruction from 2D Human Pose
- URL: http://arxiv.org/abs/2111.12696v1
- Date: Wed, 24 Nov 2021 18:48:03 GMT
- Title: A Lightweight Graph Transformer Network for Human Mesh Reconstruction from 2D Human Pose
- Authors: Ce Zheng, Matias Mendieta, Pu Wang, Aidong Lu, Chen Chen
- Abstract summary: We present GTRS, a pose-based method that can reconstruct human mesh from 2D human pose.
We demonstrate the efficiency and generalization of GTRS by extensive evaluations on the Human3.6M and 3DPW datasets.
- Score: 8.816462200869445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing deep learning-based human mesh reconstruction approaches have a
tendency to build larger networks in order to achieve higher accuracy.
Computational complexity and model size are often neglected, despite being key
characteristics for practical use of human mesh reconstruction models (e.g.
virtual try-on systems). In this paper, we present GTRS, a lightweight
pose-based method that can reconstruct human mesh from 2D human pose. We
propose a pose analysis module that uses graph transformers to exploit
structured and implicit joint correlations, and a mesh regression module that
combines the extracted pose feature with the mesh template to reconstruct the
final human mesh. We demonstrate the efficiency and generalization of GTRS by
extensive evaluations on the Human3.6M and 3DPW datasets. In particular, GTRS
achieves better accuracy than the SOTA pose-based method Pose2Mesh while only
using 10.2% of the parameters (Params) and 2.5% of the FLOPs on the challenging
in-the-wild 3DPW dataset. Code will be publicly available.
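
To make the two-module design concrete, below is a minimal sketch of a GTRS-style pipeline in PyTorch: a pose analysis stage built from graph-transformer blocks (graph-adjacency mixing for structured joint correlations plus self-attention for implicit ones), followed by a mesh regression stage that fuses the pooled pose feature with a mesh template. All module names, layer sizes, the joint count, and the template handling below are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch of a GTRS-style pipeline (illustrative only; layer sizes,
# block structure, and template handling are assumptions, not the paper's code).
import torch
import torch.nn as nn

NUM_JOINTS = 17        # assumed 2D joint count (e.g. a COCO-style skeleton)
NUM_VERTICES = 6890    # SMPL mesh resolution, used here only for the output shape


class GraphTransformerBlock(nn.Module):
    """Mixes joint features along the skeleton graph (structured correlations)
    and with self-attention over all joints (implicit correlations)."""

    def __init__(self, dim, adjacency):
        super().__init__()
        self.register_buffer("adj", adjacency)           # (J, J) joint adjacency
        self.graph_proj = nn.Linear(dim, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(), nn.Linear(2 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):                                 # x: (B, J, dim)
        x = x + torch.matmul(self.adj, self.graph_proj(x))  # graph mixing
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]                      # joint self-attention
        x = x + self.ffn(self.norm2(x))
        return x


class GTRSSketch(nn.Module):
    def __init__(self, dim=64, adjacency=None):
        super().__init__()
        adjacency = torch.eye(NUM_JOINTS) if adjacency is None else adjacency
        self.embed = nn.Linear(2, dim)                     # lift 2D joints to features
        self.pose_analysis = nn.Sequential(
            *[GraphTransformerBlock(dim, adjacency) for _ in range(3)]
        )
        # Mesh regression: fuse the pooled pose feature with a learnable mesh template.
        self.template = nn.Parameter(torch.zeros(NUM_VERTICES, 3))
        self.regress = nn.Linear(dim, NUM_VERTICES * 3)

    def forward(self, pose_2d):                            # pose_2d: (B, J, 2)
        feat = self.pose_analysis(self.embed(pose_2d))     # (B, J, dim)
        offsets = self.regress(feat.mean(dim=1)).view(-1, NUM_VERTICES, 3)
        return self.template + offsets                     # (B, V, 3) mesh vertices


if __name__ == "__main__":
    mesh = GTRSSketch()(torch.randn(2, NUM_JOINTS, 2))
    print(mesh.shape)  # torch.Size([2, 6890, 3])
```

The sketch only illustrates the data flow (2D joints in, SMPL-resolution vertices out); the actual GTRS modules, losses, and parameter counts are those described in the paper.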
Related papers
- Sampling is Matter: Point-guided 3D Human Mesh Reconstruction [0.0]
This paper presents a simple yet powerful method for 3D human mesh reconstruction from a single RGB image.
Experimental results on benchmark datasets show that the proposed method efficiently improves the performance of 3D human mesh reconstruction.
arXiv Detail & Related papers (2023-04-19T08:45:26Z)
- A Modular Multi-stage Lightweight Graph Transformer Network for Human Pose and Shape Estimation from 2D Human Pose [4.598337780022892]
We introduce a pose-based human mesh reconstruction approach that prioritizes computational efficiency without sacrificing reconstruction accuracy.
Our method consists of a 2D-to-3D lifter module that utilizes graph transformers to analyze structured and implicit joint correlations in 2D human poses, and a mesh regression module that combines the extracted pose features with a mesh template to produce the final human mesh parameters.
arXiv Detail & Related papers (2023-01-31T04:42:47Z)
- Self-supervised Human Mesh Recovery with Cross-Representation Alignment [20.69546341109787]
Self-supervised human mesh recovery methods have poor generalizability due to limited availability and diversity of 3D-annotated benchmark datasets.
We propose cross-representation alignment utilizing the complementary information from the robust but sparse representation (2D keypoints).
This adaptive cross-representation alignment explicitly learns from the deviations and captures complementary information: richness from the dense representation and robustness from the sparse representation.
arXiv Detail & Related papers (2022-09-10T04:47:20Z)
- Back to MLP: A Simple Baseline for Human Motion Prediction [59.18776744541904]
This paper tackles the problem of human motion prediction, i.e. forecasting future body poses from historically observed sequences.
We show that the performance of recent deep learning approaches can be surpassed by a lightweight, purely MLP-based architecture with only 0.14M parameters.
An exhaustive evaluation on Human3.6M, AMASS and 3DPW datasets shows that our method, which we dub siMLPe, consistently outperforms all other approaches.
arXiv Detail & Related papers (2022-07-04T16:35:58Z)
- Coarse-to-fine Animal Pose and Shape Estimation [67.39635503744395]
We propose a coarse-to-fine approach to reconstruct 3D animal mesh from a single image.
The coarse estimation stage first estimates the pose, shape and translation parameters of the SMAL model.
The estimated meshes are then used as a starting point by a graph convolutional network (GCN) to predict a per-vertex deformation in the refinement stage.
arXiv Detail & Related papers (2021-11-16T01:27:20Z)
- THUNDR: Transformer-based 3D HUmaN Reconstruction with Markers [67.8628917474705]
THUNDR is a transformer-based deep neural network methodology to reconstruct the 3D pose and shape of people.
We show state-of-the-art results on Human3.6M and 3DPW, for both the fully-supervised and the self-supervised models.
We observe very solid 3D reconstruction performance for difficult human poses collected in the wild.
arXiv Detail & Related papers (2021-06-17T09:09:24Z)
- 3D Human Pose Regression using Graph Convolutional Network [68.8204255655161]
We propose a graph convolutional network named PoseGraphNet for 3D human pose regression from 2D poses.
Our model's performance is close to the state-of-the-art, but with much fewer parameters.
arXiv Detail & Related papers (2021-05-21T14:41:31Z)
- Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose [70.23652933572647]
We propose a novel graph convolutional neural network (GraphCNN)-based system that estimates the 3D coordinates of human mesh vertices directly from the 2D human pose.
We show that our Pose2Mesh outperforms the previous 3D human pose and mesh estimation methods on various benchmark datasets.
arXiv Detail & Related papers (2020-08-20T16:01:56Z)
- Learning Nonparametric Human Mesh Reconstruction from a Single Image without Ground Truth Meshes [56.27436157101251]
We propose a novel approach to learn human mesh reconstruction without any ground truth meshes.
This is made possible by introducing two new terms into the loss function of a graph convolutional neural network (Graph CNN).
arXiv Detail & Related papers (2020-02-28T20:30:07Z)