Probabilistic Triangulation for Uncalibrated Multi-View 3D Human Pose
Estimation
- URL: http://arxiv.org/abs/2309.04756v1
- Date: Sat, 9 Sep 2023 11:03:37 GMT
- Title: Probabilistic Triangulation for Uncalibrated Multi-View 3D Human Pose
Estimation
- Authors: Boyuan Jiang, Lei Hu and Shihong Xia
- Abstract summary: This paper presents a novel Probabilistic Triangulation module that can be embedded in a calibrated 3D human pose estimation method.
Our method achieves a trade-off between estimation accuracy and generalizability.
- Score: 22.127170452402332
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D human pose estimation has been a long-standing challenge in computer
vision and graphics, where multi-view methods have significantly progressed but
are limited by the tedious calibration processes. Existing multi-view methods
are restricted to fixed camera poses and therefore lack generalization ability.
This paper presents a novel Probabilistic Triangulation module that can be
embedded in a calibrated 3D human pose estimation method, generalizing it to
uncalibrated scenes. The key idea is to model the camera pose with a
probability distribution and to iteratively update that distribution from 2D
features rather than relying on known camera poses. Specifically, we maintain
a camera pose distribution and iteratively refine it by computing the
posterior probability of the camera pose through Monte Carlo sampling. In this
way, gradients can be back-propagated directly from the 3D pose estimation to
the 2D heatmap, enabling end-to-end training. Extensive experiments on
Human3.6M and CMU Panoptic demonstrate that our method outperforms other
uncalibrated methods and achieves results comparable to state-of-the-art
calibrated methods. Our method thus strikes a trade-off between estimation
accuracy and generalizability. Our code is available at
https://github.com/bymaths/probabilistic_triangulation
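The iterative posterior update described in the abstract can be illustrated with a minimal particle-style sketch. This is not the authors' implementation: it assumes a toy 1-DoF camera pose (a single rotation angle about the z-axis with orthographic projection) and noise-free 2D keypoints, whereas the paper handles full camera poses and 2D heatmap features. The function names (`project`, `monte_carlo_pose_update`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(points, theta):
    """Orthographic projection of 3D points after rotating about z by theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (points @ R.T)[:, :2]  # drop depth -> 2D keypoints

def monte_carlo_pose_update(points, obs_2d, n_particles=500, n_iters=5, sigma=0.05):
    """Maintain a pose distribution as particles; reweight by the 2D
    reprojection likelihood and resample (Monte Carlo posterior update)."""
    # Broad prior over the unknown rotation angle.
    particles = rng.uniform(-np.pi, np.pi, n_particles)
    for _ in range(n_iters):
        # Likelihood of each sampled pose given the 2D observations.
        errs = np.array([np.sum((project(points, t) - obs_2d) ** 2)
                         for t in particles])
        w = np.exp(-errs / (2 * sigma ** 2))
        w /= w.sum()
        # Resample in proportion to the posterior weights, then jitter.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx] + rng.normal(0.0, 0.05, n_particles)
    return particles

points = rng.normal(size=(17, 3))   # toy 3D "skeleton" keypoints
true_theta = 0.7
obs = project(points, true_theta)   # noise-free 2D observations
post = monte_carlo_pose_update(points, obs)
print(np.mean(post))                # posterior mean, close to true_theta
```

In the paper's setting the likelihood would come from the 2D heatmaps rather than exact keypoints, which is what lets gradients flow from the 3D estimate back to the 2D features.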
Related papers
- No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images [100.80376573969045]
NoPoSplat is a feed-forward model capable of reconstructing 3D scenes parameterized by 3D Gaussians from multi-view images.
Our model achieves real-time 3D Gaussian reconstruction during inference.
This work makes significant advances in pose-free generalizable 3D reconstruction and demonstrates its applicability to real-world scenarios.
arXiv Detail & Related papers (2024-10-31T17:58:22Z)
- ADen: Adaptive Density Representations for Sparse-view Camera Pose Estimation [17.097170273209333]
Recovering camera poses from a set of images is a foundational task in 3D computer vision.
Recent data-driven approaches aim to directly output camera poses, either through regressing the 6DoF camera poses or formulating rotation as a probability distribution.
We propose ADen to unify the two frameworks by employing a generator and a discriminator.
arXiv Detail & Related papers (2024-08-16T22:45:46Z)
- UPose3D: Uncertainty-Aware 3D Human Pose Estimation with Cross-View and Temporal Cues [55.69339788566899]
UPose3D is a novel approach for multi-view 3D human pose estimation.
It improves robustness and flexibility without requiring direct 3D annotations.
arXiv Detail & Related papers (2024-04-23T00:18:00Z)
- Cameras as Rays: Pose Estimation via Ray Diffusion [54.098613859015856]
Estimating camera poses is a fundamental task for 3D reconstruction and remains challenging given sparsely sampled views.
We propose a distributed representation of camera pose that treats a camera as a bundle of rays.
Our proposed methods, both regression- and diffusion-based, demonstrate state-of-the-art performance on camera pose estimation on CO3D.
arXiv Detail & Related papers (2024-02-22T18:59:56Z)
- D3PRefiner: A Diffusion-based Denoise Method for 3D Human Pose Refinement [3.514184876338779]
A Diffusion-based 3D Pose Refiner is proposed to refine the output of any existing 3D pose estimator.
We leverage the architecture of current diffusion models to convert the distribution of noisy 3D poses into ground truth 3D poses.
Experimental results demonstrate the proposed architecture can significantly improve the performance of current sequence-to-sequence 3D pose estimators.
arXiv Detail & Related papers (2024-01-08T14:21:02Z)
- ManiPose: Manifold-Constrained Multi-Hypothesis 3D Human Pose Estimation [54.86887812687023]
Most 3D-HPE methods rely on regression models, which assume a one-to-one mapping between inputs and outputs.
We propose ManiPose, a novel manifold-constrained multi-hypothesis model capable of proposing multiple candidate 3D poses for each 2D input.
Unlike previous multi-hypothesis approaches, our solution is completely supervised and does not rely on complex generative models.
arXiv Detail & Related papers (2023-12-11T13:50:10Z)
- DiffPose: Multi-hypothesis Human Pose Estimation using Diffusion models [5.908471365011943]
We propose DiffPose, a conditional diffusion model that predicts multiple hypotheses for a given input image.
We show that DiffPose slightly improves upon the state of the art for multi-hypothesis pose estimation for simple poses and outperforms it by a large margin for highly ambiguous poses.
arXiv Detail & Related papers (2022-11-29T18:55:13Z)
- Shape-aware Multi-Person Pose Estimation from Multi-View Images [47.13919147134315]
Our proposed coarse-to-fine pipeline first aggregates noisy 2D observations from multiple camera views into 3D space.
The final pose estimates are attained from a novel optimization scheme which links high-confidence multi-view 2D observations and 3D joint candidates.
arXiv Detail & Related papers (2021-10-05T20:04:21Z)
- MetaPose: Fast 3D Pose from Multiple Views without 3D Supervision [72.5863451123577]
We show how to train a neural model that can perform accurate 3D pose and camera estimation.
Our method outperforms both classical bundle adjustment and weakly-supervised monocular 3D baselines.
arXiv Detail & Related papers (2021-08-10T18:39:56Z)
- Probabilistic Monocular 3D Human Pose Estimation with Normalizing Flows [24.0966076588569]
We propose a normalizing flow based method that exploits the deterministic 3D-to-2D mapping to solve the ambiguous inverse 2D-to-3D problem.
We evaluate our approach on the two benchmark datasets Human3.6M and MPI-INF-3DHP, outperforming all comparable methods in most metrics.
arXiv Detail & Related papers (2021-07-29T07:33:14Z)
- Synthetic Training for Monocular Human Mesh Recovery [100.38109761268639]
This paper aims to estimate 3D mesh of multiple body parts with large-scale differences from a single RGB image.
The main challenge is lacking training data that have complete 3D annotations of all body parts in 2D images.
We propose a depth-to-scale (D2S) projection to incorporate the depth difference into the projection function to derive per-joint scale variants.
arXiv Detail & Related papers (2020-10-27T03:31:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.