GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human
Pose Estimation from Monocular Video
- URL: http://arxiv.org/abs/2307.05853v2
- Date: Sat, 22 Jul 2023 01:30:29 GMT
- Authors: Bruce X.B. Yu, Zhi Zhang, Yongxu Liu, Sheng-hua Zhong, Yan Liu, Chang
Wen Chen
- Abstract summary: This work focuses on improving 3D human pose lifting via ground truth data.
Global-local Adaptive Graph Convolutional Network (GLA-GCN) is proposed in this work.
Our GLA-GCN implemented with ground truth 2D poses significantly outperforms state-of-the-art methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D human pose estimation has been researched for decades with promising
results. 3D human pose lifting is one of the promising directions toward this
task, where both estimated pose and ground truth pose data are used for
training. Existing pose lifting works mainly focus on improving the performance
of estimated poses, but they usually underperform when tested on ground truth
pose data. We observe that performance on estimated poses can be easily
improved by preparing good-quality 2D poses, e.g., by fine-tuning the 2D poses
or using an advanced 2D pose detector. As such, we concentrate on improving 3D
human pose lifting via ground truth data, which will also benefit future,
higher-quality estimated pose data. Towards this goal,
a simple yet effective model called Global-local Adaptive Graph Convolutional
Network (GLA-GCN) is proposed in this work. Our GLA-GCN globally models the
spatiotemporal structure via a graph representation and backtraces local joint
features for 3D human pose estimation via individually connected layers. To
validate our model design, we conduct extensive experiments on three benchmark
datasets: Human3.6M, HumanEva-I, and MPI-INF-3DHP. Experimental results show
that our GLA-GCN implemented with ground truth 2D poses significantly
outperforms state-of-the-art methods (e.g., up to around 3%, 17%, and 14% error
reductions on Human3.6M, HumanEva-I, and MPI-INF-3DHP, respectively). GitHub:
https://github.com/bruceyo/GLA-GCN.
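To make the architecture description concrete, below is a minimal, hypothetical numpy sketch of the two ideas the abstract names: a graph convolution over the skeleton adjacency (global structure modeling, spatial only here) followed by per-joint linear heads (the "individually connected layers" for local read-out). The joint ordering, edge list, layer sizes, and random weights are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch of 2D-to-3D pose lifting: a shared graph
# convolution over the skeleton (global modeling) followed by
# per-joint linear heads (local read-out). Weights are random;
# this is NOT the GLA-GCN implementation, only the general shape.

rng = np.random.default_rng(0)
J = 17  # Human3.6M-style joint count (assumed ordering)

# Skeleton edges (parent, child) for an assumed 17-joint layout.
EDGES = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6), (0, 7),
         (7, 8), (8, 9), (9, 10), (8, 11), (11, 12), (12, 13),
         (8, 14), (14, 15), (15, 16)]

# Row-normalized adjacency with self-loops: A_hat = D^{-1} (A + I)
A = np.eye(J)
for i, j in EDGES:
    A[i, j] = A[j, i] = 1.0
A_hat = A / A.sum(axis=1, keepdims=True)

def graph_conv(x, w):
    """One graph-convolution layer: aggregate neighbors, then project."""
    return np.maximum(A_hat @ x @ w, 0.0)  # ReLU activation

def lift_2d_to_3d(pose2d, hidden=16):
    """Lift a (J, 2) 2D pose to (J, 3): shared GC + per-joint heads."""
    w_gc = rng.standard_normal((2, hidden)) * 0.1        # shared weights
    w_joint = rng.standard_normal((J, hidden, 3)) * 0.1  # one head per joint
    h = graph_conv(pose2d, w_gc)                         # global features
    # Local read-out: each joint gets its own projection to 3D.
    return np.einsum('jh,jhd->jd', h, w_joint)

pose2d = rng.standard_normal((J, 2))
pose3d = lift_2d_to_3d(pose2d)
print(pose3d.shape)  # (17, 3)
```

In the actual model, the graph representation also spans the temporal dimension of the video and the weights are learned; the sketch only shows how a shared graph stage can feed joint-specific output layers.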
Related papers
- MPL: Lifting 3D Human Pose from Multi-view 2D Poses [75.26416079541723]
We propose combining 2D pose estimation, for which large and rich training datasets exist, and 2D-to-3D pose lifting, using a transformer-based network.
Our experiments demonstrate decreases up to 45% in MPJPE errors compared to the 3D pose obtained by triangulating the 2D poses.
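The error reductions quoted here and above are in MPJPE (Mean Per Joint Position Error), the standard 3D pose metric: the mean Euclidean distance between predicted and ground-truth joint positions, typically reported in millimeters. A minimal numpy sketch, with illustrative data:

```python
import numpy as np

# MPJPE: mean Euclidean distance between predicted and ground-truth
# joints. The poses below are synthetic, for illustration only.

def mpjpe(pred, gt):
    """pred, gt: arrays of shape (..., J, 3); returns the mean error."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

gt = np.zeros((17, 3))
pred = gt + np.array([3.0, 0.0, 4.0])  # every joint offset by a 3-4-5 vector
print(mpjpe(pred, gt))  # 5.0
```

A "45% decrease in MPJPE" thus means the mean per-joint distance to ground truth dropped by that fraction relative to the triangulation baseline.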
arXiv Detail & Related papers (2024-08-20T12:55:14Z)
- Decanus to Legatus: Synthetic training for 2D-3D human pose lifting [26.108023246654646]
We propose an algorithm to generate infinite 3D synthetic human poses (Legatus) from a 3D pose distribution based on 10 initial handcrafted 3D poses (Decanus).
Our results show that we can achieve 3D pose estimation performance comparable to methods using real data from specialized datasets but in a zero-shot setup, showing the potential of our framework.
arXiv Detail & Related papers (2022-10-05T13:10:19Z)
- PoseGU: 3D Human Pose Estimation with Novel Human Pose Generator and Unbiased Learning [36.609189237732394]
3D pose estimation has recently gained substantial interest in the computer vision community.
Existing 3D pose estimation methods rely heavily on large, well-annotated 3D pose datasets.
We propose PoseGU, a novel human pose generator that generates diverse poses with access to only a small set of seed samples.
arXiv Detail & Related papers (2022-07-07T23:43:53Z)
- SPGNet: Spatial Projection Guided 3D Human Pose Estimation in Low Dimensional Space [14.81199315166042]
We propose a method for 3D human pose estimation that mixes multi-dimensional re-projection into supervised learning.
On the Human3.6M dataset, our approach outperforms many state-of-the-art methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-06-04T00:51:00Z)
- PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision [102.48681650013698]
Existing self-supervised 3D human pose estimation schemes have largely relied on weak supervision to guide the learning.
We propose a novel self-supervised approach that allows us to explicitly generate 2D-3D pose pairs for augmenting supervision.
This is made possible via introducing a reinforcement-learning-based imitator, which is learned jointly with a pose estimator alongside a pose hallucinator.
arXiv Detail & Related papers (2022-03-29T14:45:53Z)
- PONet: Robust 3D Human Pose Estimation via Learning Orientations Only [116.1502793612437]
We propose a novel Pose Orientation Net (PONet) that is able to robustly estimate 3D pose by learning orientations only.
PONet estimates the 3D orientation of these limbs by taking advantage of the local image evidence to recover the 3D pose.
We evaluate our method on multiple datasets, including Human3.6M, MPII, MPI-INF-3DHP, and 3DPW.
arXiv Detail & Related papers (2021-12-21T12:48:48Z)
- ElePose: Unsupervised 3D Human Pose Estimation by Predicting Camera Elevation and Learning Normalizing Flows on 2D Poses [23.554957518485324]
We propose an unsupervised approach that learns to predict a 3D human pose from a single image.
We estimate the 3D pose that is most likely over random projections, with the likelihood estimated using normalizing flows on 2D poses.
We outperform the state-of-the-art unsupervised human pose estimation methods on the benchmark datasets Human3.6M and MPI-INF-3DHP in many metrics.
arXiv Detail & Related papers (2021-12-14T01:12:45Z)
- Heuristic Weakly Supervised 3D Human Pose Estimation [13.82540778667711]
We present a heuristic weakly supervised 3D human pose (HW-HuP) solution to estimate 3D poses when no ground truth 3D pose data is available.
We show that HW-HuP meaningfully improves upon state-of-the-art models in two practical settings where 3D pose data can hardly be obtained: human poses in bed, and infant poses in the wild.
arXiv Detail & Related papers (2021-05-23T18:40:29Z)
- Synthetic Training for Monocular Human Mesh Recovery [100.38109761268639]
This paper aims to estimate 3D mesh of multiple body parts with large-scale differences from a single RGB image.
The main challenge is lacking training data that have complete 3D annotations of all body parts in 2D images.
We propose a depth-to-scale (D2S) projection to incorporate the depth difference into the projection function to derive per-joint scale variants.
arXiv Detail & Related papers (2020-10-27T03:31:35Z)
- Cascaded deep monocular 3D human pose estimation with evolutionary training data [76.3478675752847]
Deep representation learning has achieved remarkable accuracy for monocular 3D human pose estimation.
This paper proposes a novel data augmentation method that is scalable for massive amount of training data.
Our method synthesizes unseen 3D human skeletons based on a hierarchical human representation and heuristics inspired by prior knowledge.
arXiv Detail & Related papers (2020-06-14T03:09:52Z)
- Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis [72.34794624243281]
We propose a self-supervised learning framework to disentangle variations from unlabeled video frames.
Our differentiable formalization, bridging the representation gap between the 3D pose and spatial part maps, allows us to operate on videos with diverse camera movements.
arXiv Detail & Related papers (2020-04-09T07:55:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.