Weakly Supervised Generative Network for Multiple 3D Human Pose Hypotheses
- URL: http://arxiv.org/abs/2008.05770v1
- Date: Thu, 13 Aug 2020 09:26:01 GMT
- Title: Weakly Supervised Generative Network for Multiple 3D Human Pose Hypotheses
- Authors: Chen Li and Gim Hee Lee
- Abstract summary: 3D human pose estimation from a single image is an inverse problem due to the inherent ambiguity of the missing depth.
We propose a weakly supervised deep generative network to address the inverse problem.
- Score: 74.48263583706712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D human pose estimation from a single image is an inverse problem due to the
inherent ambiguity of the missing depth. Several previous works addressed the
inverse problem by generating multiple hypotheses. However, these works are
strongly supervised and require ground truth 2D-to-3D correspondences which can
be difficult to obtain. In this paper, we propose a weakly supervised deep
generative network to address the inverse problem and circumvent the need for
ground truth 2D-to-3D correspondences. To this end, we design our network to
model a proposal distribution which we use to approximate the unknown
multi-modal target posterior distribution. We achieve the approximation by
minimizing the KL divergence between the proposal and target distributions, and
this leads to a 2D reprojection error and a prior loss term that can be weakly
supervised. Furthermore, we determine the most probable solution as the
conditional mode of the samples using the mean-shift algorithm. We evaluate our
method on three benchmark datasets -- Human3.6M, MPII and MPI-INF-3DHP.
Experimental results show that our approach is capable of generating multiple
feasible hypotheses and achieves state-of-the-art results compared to existing
weakly supervised approaches. Our source code is available at the project
website.
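Below is a minimal sketch, in PyTorch-style Python, of the two weakly supervised terms named in the abstract (a 2D reprojection error plus a prior loss arising from the KL minimization) and of selecting the most probable solution as the conditional mode of the samples with mean-shift. The pinhole projection, the stand-in pose_prior callable, the weight w_prior, and the Gaussian-kernel mean-shift are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of the weakly supervised objective (2D reprojection error + prior term)
# and mean-shift mode selection described in the abstract. All modelling choices
# below (camera, prior, bandwidth) are assumptions for illustration only.

import torch

def reproject(pose_3d, focal=1.0):
    """Pinhole-project 3D joints (..., J, 3) to 2D (..., J, 2); unit focal length assumed."""
    x, y, z = pose_3d[..., 0], pose_3d[..., 1], pose_3d[..., 2].clamp(min=1e-4)
    return torch.stack((focal * x / z, focal * y / z), dim=-1)

def weakly_supervised_loss(pose_3d_samples, pose_2d, pose_prior, w_prior=0.1):
    """pose_3d_samples: (S, J, 3) hypotheses drawn from the proposal distribution.
    pose_2d: (J, 2) detected 2D keypoints (the only supervision used here).
    pose_prior: callable returning a per-sample negative log-prior (a stand-in
    for the paper's prior loss term)."""
    reproj = reproject(pose_3d_samples)                       # (S, J, 2)
    reproj_err = ((reproj - pose_2d.unsqueeze(0)) ** 2).sum(-1).mean()
    prior_term = pose_prior(pose_3d_samples).mean()
    return reproj_err + w_prior * prior_term

def mean_shift_mode(samples, bandwidth=0.5, iters=20):
    """Return the most probable hypothesis as the conditional mode of the
    samples via a plain Gaussian-kernel mean-shift; samples: (S, J, 3)."""
    pts = samples.reshape(samples.shape[0], -1).clone()       # (S, D)
    x = pts.clone()
    for _ in range(iters):
        d2 = torch.cdist(x, pts) ** 2                         # (S, S)
        w = torch.exp(-d2 / (2 * bandwidth ** 2))
        x = (w @ pts) / w.sum(dim=1, keepdim=True)            # shift toward density peaks
    # keep the shifted point with the highest kernel density estimate
    dens = torch.exp(-torch.cdist(x, pts) ** 2 / (2 * bandwidth ** 2)).sum(dim=1)
    return x[dens.argmax()].reshape(samples.shape[1:])
```

In the paper the hypotheses come from the proposed deep generative network conditioned on the detected 2D pose; here they are simply passed in as a tensor of samples.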
Related papers
- Utilizing Uncertainty in 2D Pose Detectors for Probabilistic 3D Human Mesh Recovery [23.473909489868454]
Probabilistic approaches learn a distribution over plausible 3D human meshes.
We show that this objective alone is not sufficient to capture the full distributions.
We demonstrate that person segmentation masks can be utilized during training to significantly decrease the number of invalid samples.
arXiv Detail & Related papers (2024-11-25T11:13:12Z)
- X as Supervision: Contending with Depth Ambiguity in Unsupervised Monocular 3D Pose Estimation [12.765995624408557]
We propose an unsupervised framework featuring a multi-hypothesis detector and multiple tailored pretext tasks.
The detector extracts multiple hypotheses from a heatmap within a local window, effectively managing the multi-solution problem (a toy sketch of this extraction step appears after this list).
The pretext tasks harness 3D human priors from the SMPL model to regularize the solution space of pose estimation, aligning it with the empirical distribution of 3D human structures.
arXiv Detail & Related papers (2024-11-20T04:18:11Z)
- Diffusion-Based 3D Human Pose Estimation with Multi-Hypothesis Aggregation [64.874000550443]
A Diffusion-based 3D Pose estimation (D3DP) method with Joint-wise reProjection-based Multi-hypothesis Aggregation (JPMA) is proposed.
The proposed JPMA assembles multiple hypotheses generated by D3DP into a single 3D pose for practical use (a simplified joint-wise aggregation sketch appears after this list).
Our method outperforms the state-of-the-art deterministic and probabilistic approaches by 1.5% and 8.9%, respectively.
arXiv Detail & Related papers (2023-03-21T04:00:47Z)
- DiffuPose: Monocular 3D Human Pose Estimation via Denoising Diffusion Probabilistic Model [25.223801390996435]
This paper focuses on reconstructing a 3D pose from a single 2D keypoint detection.
We build a novel diffusion-based framework to effectively sample diverse 3D poses from an off-the-shelf 2D detector.
We evaluate our method on the widely adopted Human3.6M and HumanEva-I datasets.
arXiv Detail & Related papers (2022-12-06T07:22:20Z)
- Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net that constitutes a common deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z)
- Multi-initialization Optimization Network for Accurate 3D Human Pose and Shape Estimation [75.44912541912252]
We propose a three-stage framework named Multi-Initialization Optimization Network (MION).
In the first stage, we strategically select different coarse 3D reconstruction candidates which are compatible with the 2D keypoints of the input sample.
In the second stage, we design a mesh refinement transformer (MRT) to refine each coarse reconstruction result via a self-attention mechanism.
Finally, a Consistency Estimation Network (CEN) is proposed to find the best result from multiple candidates by evaluating whether the visual evidence in the RGB image matches a given 3D reconstruction.
arXiv Detail & Related papers (2021-12-24T02:43:58Z)
- Probabilistic Monocular 3D Human Pose Estimation with Normalizing Flows [24.0966076588569]
We propose a normalizing flow based method that exploits the deterministic 3D-to-2D mapping to solve the ambiguous inverse 2D-to-3D problem.
We evaluate our approach on the two benchmark datasets Human3.6M and MPI-INF-3DHP, outperforming all comparable methods in most metrics.
arXiv Detail & Related papers (2021-07-29T07:33:14Z)
- 3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data [77.57798334776353]
We consider the problem of obtaining dense 3D reconstructions of humans from single and partially occluded views.
We suggest that ambiguities can be modelled more effectively by parametrizing the possible body shapes and poses.
We show that our method outperforms alternative approaches in ambiguous pose recovery on standard benchmarks for 3D humans.
arXiv Detail & Related papers (2020-11-02T13:55:31Z)
- Coherent Reconstruction of Multiple Humans from a Single Image [68.3319089392548]
In this work, we address the problem of multi-person 3D pose estimation from a single image.
A typical regression approach in the top-down setting of this problem would first detect all humans and then reconstruct each one of them independently.
Our goal is to train a single network that learns to avoid these problems and generate a coherent 3D reconstruction of all the humans in the scene.
arXiv Detail & Related papers (2020-06-15T17:51:45Z)
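For the X as Supervision entry above, here is a toy sketch (an assumption, not the paper's detector) of extracting several candidate joint locations from a single heatmap by repeatedly taking the strongest response and suppressing a local window around it, which is one way to read the "multiple hypotheses from a heatmap within a local window" summary. The window size, hypothesis count, and NumPy routine are all illustrative choices.

```python
# Toy local-window multi-hypothesis extraction from a 2D joint heatmap.

import numpy as np

def heatmap_hypotheses(heatmap, num_hypotheses=3, window=5):
    """heatmap: (H, W) response map for one joint.
    Returns up to num_hypotheses (x, y, score) local peaks, zeroing out a
    window-sized neighbourhood around each selected peak before the next pick."""
    hm = heatmap.copy()
    half = window // 2
    peaks = []
    for _ in range(num_hypotheses):
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        score = hm[y, x]
        if score <= 0:
            break
        peaks.append((int(x), int(y), float(score)))
        # suppress the local window so the next peak is a different mode
        y0, y1 = max(0, y - half), min(hm.shape[0], y + half + 1)
        x0, x1 = max(0, x - half), min(hm.shape[1], x + half + 1)
        hm[y0:y1, x0:x1] = 0
    return peaks
```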
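For the Diffusion-Based 3D Human Pose Estimation with Multi-Hypothesis Aggregation entry, a simplified sketch of joint-wise reprojection-based aggregation: each joint of the output pose is taken from the hypothesis whose 2D reprojection lands closest to the detected keypoint. The unit-focal pinhole projection and the function names are assumptions; the actual JPMA procedure is described in the cited paper.

```python
# Simplified joint-wise reprojection-based aggregation of 3D pose hypotheses.

import numpy as np

def project(joints_3d):
    """Pinhole projection of (..., 3) joints to (..., 2); unit focal length assumed."""
    z = np.clip(joints_3d[..., 2:3], 1e-4, None)
    return joints_3d[..., :2] / z

def jointwise_aggregate(hypotheses_3d, keypoints_2d):
    """hypotheses_3d: (H, J, 3) candidate poses; keypoints_2d: (J, 2) detections.
    Returns a single (J, 3) pose assembled joint by joint."""
    reproj = project(hypotheses_3d)                              # (H, J, 2)
    err = np.linalg.norm(reproj - keypoints_2d[None], axis=-1)   # (H, J)
    best = err.argmin(axis=0)                                    # best hypothesis per joint
    return hypotheses_3d[best, np.arange(hypotheses_3d.shape[1])]
```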
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.