ManiPose: Manifold-Constrained Multi-Hypothesis 3D Human Pose Estimation
- URL: http://arxiv.org/abs/2312.06386v1
- Date: Mon, 11 Dec 2023 13:50:10 GMT
- Title: ManiPose: Manifold-Constrained Multi-Hypothesis 3D Human Pose Estimation
- Authors: Cédric Rommel, Victor Letzelter, Nermin Samet, Renaud Marlet,
Matthieu Cord, Patrick Pérez and Eduardo Valle
- Abstract summary: Most 3D-HPE methods rely on regression models, which assume a one-to-one mapping between inputs and outputs.
We propose ManiPose, a novel manifold-constrained multi-hypothesis model capable of proposing multiple candidate 3D poses for each 2D input.
Unlike previous multi-hypothesis approaches, our solution is completely supervised and does not rely on complex generative models.
- Score: 54.86887812687023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monocular 3D human pose estimation (3D-HPE) is an inherently ambiguous task,
as a 2D pose in an image might originate from different possible 3D poses. Yet,
most 3D-HPE methods rely on regression models, which assume a one-to-one
mapping between inputs and outputs. In this work, we provide theoretical and
empirical evidence that, because of this ambiguity, common regression models
are bound to predict topologically inconsistent poses, and that traditional
evaluation metrics, such as the MPJPE, P-MPJPE and PCK, are insufficient to
assess this aspect. As a solution, we propose ManiPose, a novel
manifold-constrained multi-hypothesis model capable of proposing multiple
candidate 3D poses for each 2D input, together with their corresponding
plausibility. Unlike previous multi-hypothesis approaches, our solution is
completely supervised and does not rely on complex generative models, thus
greatly facilitating its training and usage. Furthermore, by constraining our
model to lie within the human pose manifold, we can guarantee the consistency
of all hypothetical poses predicted with our approach, which was not possible
in previous works. We illustrate the usefulness of ManiPose in a synthetic
1D-to-2D lifting setting and demonstrate on real-world datasets that it
outperforms state-of-the-art models in pose consistency by a large margin,
while still reaching competitive MPJPE performance.
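For context on the metrics named in the abstract: MPJPE is the mean Euclidean distance between predicted and ground-truth joint positions, P-MPJPE measures the same after a rigid (Procrustes) alignment, and PCK counts the fraction of joints falling within a distance threshold. The NumPy sketch below, with made-up array shapes and a hypothetical best_hypothesis_mpjpe helper, shows how MPJPE could be computed and how a most-plausible versus oracle-best candidate might be compared for a multi-hypothesis predictor; it is a generic illustration, not ManiPose's implementation.

    import numpy as np

    def mpjpe(pred, gt):
        """Mean Per-Joint Position Error: average Euclidean distance
        between predicted and ground-truth 3D joints (shape [J, 3])."""
        return np.linalg.norm(pred - gt, axis=-1).mean()

    def best_hypothesis_mpjpe(hypotheses, scores, gt):
        """Given K candidate poses (shape [K, J, 3]) and their predicted
        plausibility scores (shape [K]), return:
          - the MPJPE of the most plausible hypothesis (what a user would pick),
          - the oracle MPJPE of the hypothesis closest to the ground truth."""
        errors = np.array([mpjpe(h, gt) for h in hypotheses])
        return errors[np.argmax(scores)], errors.min()

    # Toy example: 2 hypotheses over a 17-joint (Human3.6M-style) skeleton.
    rng = np.random.default_rng(0)
    gt = rng.normal(size=(17, 3))
    hyps = gt[None] + 0.05 * rng.normal(size=(2, 17, 3))
    scores = np.array([0.7, 0.3])
    print(best_hypothesis_mpjpe(hyps, scores, gt))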
Related papers
- PoseGU: 3D Human Pose Estimation with Novel Human Pose Generator and Unbiased Learning [36.609189237732394]
3D pose estimation has recently gained substantial interest in the computer vision community.
Existing 3D pose estimation methods rely heavily on large, well-annotated 3D pose datasets.
We propose PoseGU, a novel human pose generator that produces diverse poses with access to only a small set of seed samples.
arXiv Detail & Related papers (2022-07-07T23:43:53Z)
- Probabilistic Monocular 3D Human Pose Estimation with Normalizing Flows [24.0966076588569]
We propose a normalizing flow based method that exploits the deterministic 3D-to-2D mapping to solve the ambiguous inverse 2D-to-3D problem.
We evaluate our approach on the two benchmark datasets Human3.6M and MPI-INF-3DHP, outperforming all comparable methods in most metrics.
arXiv Detail & Related papers (2021-07-29T07:33:14Z)
- 3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data [77.57798334776353]
We consider the problem of obtaining dense 3D reconstructions of humans from single and partially occluded views.
We suggest that ambiguities can be modelled more effectively by parametrizing the possible body shapes and poses.
We show that our method outperforms alternative approaches in ambiguous pose recovery on standard benchmarks for 3D humans.
arXiv Detail & Related papers (2020-11-02T13:55:31Z)
- Synthetic Training for Monocular Human Mesh Recovery [100.38109761268639]
This paper aims to estimate the 3D mesh of multiple body parts with large scale differences from a single RGB image.
The main challenge is the lack of training data with complete 3D annotations of all body parts in 2D images.
We propose a depth-to-scale (D2S) projection that incorporates the depth difference into the projection function to derive per-joint scale variants.
arXiv Detail & Related papers (2020-10-27T03:31:35Z)
- Multi-person 3D Pose Estimation in Crowded Scenes Based on Multi-View Geometry [62.29762409558553]
Epipolar constraints are at the core of feature matching and depth estimation in multi-person 3D human pose estimation methods (a generic sketch of this constraint appears after this list).
Despite the satisfactory performance of this formulation in sparser crowd scenes, its effectiveness is frequently challenged in denser crowds.
In this paper, we depart from the multi-person 3D pose estimation formulation and instead reformulate the task as crowd pose estimation.
arXiv Detail & Related papers (2020-07-21T17:59:36Z)
- Kinematic-Structure-Preserved Representation for Unsupervised 3D Human Pose Estimation [58.72192168935338]
The generalizability of human pose estimation models trained with supervision on large-scale in-studio datasets remains questionable.
We propose a novel kinematic-structure-preserved unsupervised 3D pose estimation framework that does not rely on any paired or unpaired weak supervision.
Our proposed model employs three consecutive differentiable transformations: forward kinematics, camera projection and spatial-map transformation (the first two are sketched generically after this list).
arXiv Detail & Related papers (2020-06-24T23:56:33Z)
- Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis [72.34794624243281]
We propose a self-supervised learning framework to disentangle variations from unlabeled video frames.
Our differentiable formalization, bridging the representation gap between the 3D pose and spatial part maps, allows us to operate on videos with diverse camera movements.
arXiv Detail & Related papers (2020-04-09T07:55:01Z)
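The crowded-scenes entry above relies on epipolar geometry for cross-view matching. As a generic refresher rather than that paper's pipeline: matched 2D joints x1 and x2 in two calibrated views should satisfy x2^T F x1 = 0 for the fundamental matrix F, and candidate matches can be scored by their distance to the corresponding epipolar lines. The sketch below assumes F is already known and uses a hypothetical epipolar_residual helper.

    import numpy as np

    def epipolar_residual(x1, x2, F):
        """Symmetric epipolar distance between matched 2D joints.
        x1, x2: [N, 2] pixel coordinates in view 1 and view 2.
        F: [3, 3] fundamental matrix mapping view-1 points to epipolar
        lines in view 2 (x2^T F x1 = 0 for perfect matches)."""
        ones = np.ones((x1.shape[0], 1))
        p1 = np.hstack([x1, ones])             # homogeneous coordinates, [N, 3]
        p2 = np.hstack([x2, ones])
        l2 = p1 @ F.T                          # epipolar lines in view 2
        l1 = p2 @ F                            # epipolar lines in view 1
        num = np.abs(np.sum(p2 * l2, axis=1))  # |x2^T F x1|
        d2 = num / np.linalg.norm(l2[:, :2], axis=1)  # point-line distance in view 2
        d1 = num / np.linalg.norm(l1[:, :2], axis=1)  # point-line distance in view 1
        return 0.5 * (d1 + d2)                 # average of the two distances

Candidate cross-view matches whose residual exceeds a chosen threshold would typically be rejected before triangulation; the threshold itself is an application-dependent choice.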
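Several entries above rest on two standard operations: a forward-kinematics step that turns fixed bone lengths and predicted rotations into 3D joints (also one way to keep outputs on a pose manifold with consistent segment lengths, as ManiPose requires), and a deterministic pinhole projection from 3D to 2D (the mapping exploited by the normalizing-flow method). The sketch below uses a toy planar chain and hypothetical helper names; it is an illustration under simplified assumptions, not any of the papers' actual models.

    import numpy as np

    def rot_z(theta):
        """Rotation matrix about the z-axis."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def forward_kinematics(bone_lengths, joint_angles):
        """Chain forward kinematics for a toy planar kinematic chain.
        Predicting angles while keeping bone_lengths fixed guarantees
        every output pose has consistent segment lengths."""
        joints = [np.zeros(3)]
        R = np.eye(3)
        for length, theta in zip(bone_lengths, joint_angles):
            R = R @ rot_z(theta)  # accumulate rotations along the chain
            joints.append(joints[-1] + R @ np.array([length, 0.0, 0.0]))
        return np.stack(joints)   # [num_joints, 3]

    def project_pinhole(joints_3d, focal=1000.0, center=(500.0, 500.0), depth=3000.0):
        """Deterministic 3D-to-2D pinhole projection (camera looking down +z)."""
        z = joints_3d[:, 2] + depth  # place the skeleton in front of the camera
        u = focal * joints_3d[:, 0] / z + center[0]
        v = focal * joints_3d[:, 1] / z + center[1]
        return np.stack([u, v], axis=1)  # [num_joints, 2]

    # Toy usage: three bones of fixed length (in mm), arbitrary joint angles.
    pose_3d = forward_kinematics([300.0, 280.0, 250.0], [0.3, -0.5, 0.8])
    pose_2d = project_pinhole(pose_3d)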