Distribution-Aware Single-Stage Models for Multi-Person 3D Pose
Estimation
- URL: http://arxiv.org/abs/2203.07697v2
- Date: Wed, 16 Mar 2022 03:11:43 GMT
- Title: Distribution-Aware Single-Stage Models for Multi-Person 3D Pose
Estimation
- Authors: Zitian Wang, Xuecheng Nie, Xiaochao Qu, Yunpeng Chen, Si Liu
- Abstract summary: We present a novel Distribution-Aware Single-stage (DAS) model for tackling the challenging multi-person 3D pose estimation problem.
The proposed DAS model simultaneously localizes person positions and their corresponding body joints in the 3D camera space in a one-pass manner.
Comprehensive experiments on the CMU Panoptic and MuPoTS-3D benchmarks demonstrate the superior efficiency of the proposed DAS model.
- Score: 29.430404703883084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a novel Distribution-Aware Single-stage (DAS) model
for tackling the challenging multi-person 3D pose estimation problem. Different
from existing top-down and bottom-up methods, the proposed DAS model
simultaneously localizes person positions and their corresponding body joints
in the 3D camera space in a one-pass manner. This leads to a simplified
pipeline with enhanced efficiency. In addition, DAS learns the true
distribution of body joints for the regression of their positions, rather than
making a simple Laplacian or Gaussian assumption as previous works do. This
provides valuable priors for model prediction and thus boosts the
regression-based scheme to achieve competitive performance with volumetric-based
ones. Moreover, DAS exploits a recursive update strategy to progressively
approach the regression target, alleviating the optimization difficulty and
further lifting regression performance. DAS is implemented with a fully
convolutional neural network and is end-to-end learnable. Comprehensive
experiments on the CMU Panoptic and MuPoTS-3D benchmarks demonstrate the
superior efficiency of the proposed DAS model, specifically a 1.5x speedup over
the previous best model, and its state-of-the-art accuracy for multi-person 3D
pose estimation.
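As a rough illustration of two ideas in the abstract, the sketch below pairs a weight-shared refinement head (standing in for the recursive update strategy) with a learned residual density used as the regression loss in place of a fixed Laplacian or Gaussian assumption. This is a minimal sketch under stated assumptions: the layer shapes, joint count, and the zero-mean mixture-of-Gaussians density are all illustrative choices, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecursiveJointHead(nn.Module):
    """Weight-shared conv head that refines per-pixel 3D joint offsets
    over several iterations (illustrative, not the paper's architecture)."""
    def __init__(self, in_ch=256, num_joints=15, num_iters=3):
        super().__init__()
        self.num_joints, self.num_iters = num_joints, num_iters
        self.refine = nn.Sequential(
            nn.Conv2d(in_ch + 3 * num_joints, 256, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 3 * num_joints, 1),
        )

    def forward(self, feats):
        b, _, h, w = feats.shape
        offsets = feats.new_zeros(b, 3 * self.num_joints, h, w)
        for _ in range(self.num_iters):  # progressively approach the target
            offsets = offsets + self.refine(torch.cat([feats, offsets], 1))
        return offsets

class LearnedResidualNLL(nn.Module):
    """Learned density over regression residuals; a zero-mean mixture of
    Gaussians is an assumed stand-in for the paper's learned distribution."""
    def __init__(self, num_components=4):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_components))
        self.log_scales = nn.Parameter(torch.zeros(num_components))

    def forward(self, pred, target):
        r = (pred - target).unsqueeze(-1)       # broadcast over K components
        log_w = F.log_softmax(self.logits, dim=0)
        log_p = (-0.5 * (r * torch.exp(-self.log_scales)) ** 2
                 - self.log_scales - 0.5 * math.log(2 * math.pi))
        return -torch.logsumexp(log_w + log_p, dim=-1).mean()

# Toy usage: random features and targets, one loss/backward step.
feats = torch.randn(2, 256, 32, 32)
head, nll = RecursiveJointHead(), LearnedResidualNLL()
pred = head(feats)
loss = nll(pred, torch.randn_like(pred))
loss.backward()
```

Because the mixture parameters are learned jointly with the regressor, the loss can adapt its shape to the empirical residual distribution, which is the general motivation behind replacing a fixed Laplacian or Gaussian likelihood.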
Related papers
- Addressing Concept Shift in Online Time Series Forecasting: Detect-then-Adapt [37.98336090671441]
It proposes Concept Drift Detection and Adaptation (D3A), which first detects drifting concepts and then aggressively adapts the current model to the drifted concepts for rapid adaptation.
It helps mitigate the data distribution gap, a critical factor contributing to train-test performance inconsistency.
arXiv Detail & Related papers (2024-03-22T04:44:43Z)
- DiffHPE: Robust, Coherent 3D Human Pose Lifting with Diffusion [54.0238087499699]
We show that diffusion models enhance the accuracy, robustness, and coherence of human pose estimations.
We introduce DiffHPE, a novel strategy for harnessing diffusion models in 3D-HPE.
Our findings indicate that while standalone diffusion models provide commendable performance, their accuracy is even better in combination with supervised models.
arXiv Detail & Related papers (2023-09-04T12:54:10Z)
- A Probabilistic Attention Model with Occlusion-aware Texture Regression for 3D Hand Reconstruction from a Single RGB Image [5.725477071353354]
Deep learning approaches have shown promising results in 3D hand reconstruction from a single RGB image.
We propose a novel probabilistic model to achieve the robustness of model-based approaches.
We demonstrate the flexibility of the proposed probabilistic model, which can be trained in both supervised and weakly-supervised scenarios.
arXiv Detail & Related papers (2023-04-27T16:02:32Z)
- Learned Vertex Descent: A New Direction for 3D Human Model Fitting [64.04726230507258]
We propose a novel optimization-based paradigm for 3D human model fitting on images and scans.
Our approach is able to capture the underlying body of clothed people with very different body shapes, achieving a significant improvement over the state of the art.
LVD is also applicable to 3D model fitting of humans and hands, where it shows a significant improvement over the SOTA with a much simpler and faster method.
arXiv Detail & Related papers (2022-05-12T17:55:51Z)
- Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net that constitutes a common deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z)
- Dynamic Iterative Refinement for Efficient 3D Hand Pose Estimation [87.54604263202941]
We propose a tiny deep neural network of which partial layers are iteratively exploited for refining its previous estimations.
We employ learned gating criteria to decide whether to exit from the weight-sharing loop, allowing per-sample adaptation in our model.
Our method consistently outperforms state-of-the-art 2D/3D hand pose estimation approaches in both accuracy and efficiency on widely used benchmarks.
arXiv Detail & Related papers (2021-11-11T23:31:34Z)
- Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
arXiv Detail & Related papers (2021-08-26T17:55:11Z)
- Inference Stage Optimization for Cross-scenario 3D Human Pose Estimation [97.93687743378106]
Existing 3D pose estimation models suffer a performance drop when applied to new scenarios with unseen poses.
We propose a novel framework, Inference Stage Optimization (ISO), for improving the generalizability of 3D pose models; a minimal sketch of this test-time idea follows this list.
Remarkably, it yields new state-of-the-art of 83.6% 3D PCK on MPI-INF-3DHP, improving upon the previous best result by 9.7%.
arXiv Detail & Related papers (2020-07-04T09:45:18Z)
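The ISO entry above describes adapting a pose model at inference time. Below is a minimal sketch of that general test-time adaptation idea, assuming a generic PyTorch model and a caller-supplied label-free loss; the reprojection-consistency example in the comment is a placeholder assumption, not ISO's actual objective.

```python
import copy
import torch

def inference_stage_optimize(model, x, self_sup_loss, steps=5, lr=1e-4):
    """Fine-tune a copy of `model` on one unlabeled test batch `x` using a
    self-supervised objective, then predict with the adapted weights."""
    adapted = copy.deepcopy(model)        # leave the source model untouched
    adapted.train()
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = self_sup_loss(adapted, x)  # e.g., 2D reprojection consistency
        loss.backward()
        opt.step()
    adapted.eval()
    with torch.no_grad():
        return adapted(x)                 # prediction from adapted weights
```

Copying the model per batch keeps the source weights intact, so adaptation on one scenario cannot degrade later predictions; this mirrors the general cross-scenario setting the entry describes.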