Multi-Person Absolute 3D Human Pose Estimation with Weak Depth Supervision
- URL: http://arxiv.org/abs/2004.03989v1
- Date: Wed, 8 Apr 2020 13:29:22 GMT
- Title: Multi-Person Absolute 3D Human Pose Estimation with Weak Depth Supervision
- Authors: Marton Veges, Andras Lorincz
- Abstract summary: We introduce a network that can be trained with additional RGB-D images in a weakly supervised fashion.
Our algorithm is a monocular, multi-person, absolute pose estimator.
We evaluate the algorithm on several benchmarks, showing a consistent improvement in error rates.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In 3D human pose estimation one of the biggest problems is the lack of large,
diverse datasets. This is especially true for multi-person 3D pose estimation,
where, to our knowledge, there are only machine generated annotations available
for training. To mitigate this issue, we introduce a network that can be
trained with additional RGB-D images in a weakly supervised fashion. Due to the
existence of cheap sensors, videos with depth maps are widely available, and
our method can exploit a large, unannotated dataset. Our algorithm is a
monocular, multi-person, absolute pose estimator. We evaluate the algorithm on
several benchmarks, showing a consistent improvement in error rates. Also, our
model achieves state-of-the-art results on the MuPoTS-3D dataset by a
considerable margin.
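The abstract does not spell out the training objective, but the weak supervision it describes (unannotated RGB-D frames constraining the absolute depth of predicted poses) can be sketched roughly as below. This is a minimal, illustrative sketch only: the function name, tensor shapes, and the robust-loss choice are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def weak_depth_loss(pred_root_depth, depth_map, root_uv):
    """Illustrative sketch (not the paper's exact loss): penalize the gap between
    the predicted absolute root depth of each detected person and the sensor depth
    sampled at that person's 2D root-joint pixel in an RGB-D frame.

    pred_root_depth: (P,) predicted absolute depth of each person's root joint [m]
    depth_map:       (H, W) sensor depth from the RGB-D frame [m], 0 = no reading
    root_uv:         (P, 2) integer pixel coordinates (u, v) of each 2D root joint
    """
    u, v = root_uv[:, 0], root_uv[:, 1]
    sensor_depth = depth_map[v, u]          # depth at each root pixel, shape (P,)
    valid = sensor_depth > 0                # ignore pixels with no depth reading
    if valid.sum() == 0:
        return pred_root_depth.new_zeros(())
    # A robust loss is a natural choice since sensor depth is noisy;
    # the actual formulation in the paper may differ.
    return F.smooth_l1_loss(pred_root_depth[valid], sensor_depth[valid])
```

On annotated data a fully supervised pose loss would still be used; a term of this kind would then be added with a weighting factor for the unannotated RGB-D frames.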
Related papers
- X as Supervision: Contending with Depth Ambiguity in Unsupervised Monocular 3D Pose Estimation [12.765995624408557]
We propose an unsupervised framework featuring a multi-hypothesis detector and multiple tailored pretext tasks.
The detector extracts multiple hypotheses from a heatmap within a local window, effectively managing the multi-solution problem.
The pretext tasks harness 3D human priors from the SMPL model to regularize the solution space of pose estimation, aligning it with the empirical distribution of 3D human structures.
arXiv Detail & Related papers (2024-11-20T04:18:11Z)
- Multi-person 3D pose estimation from unlabelled data [2.54990557236581]
We present a model based on Graph Neural Networks capable of predicting the cross-view correspondence of the people in the scenario.
We also present a Multilayer Perceptron that takes the 2D points to yield the 3D poses of each person.
arXiv Detail & Related papers (2022-12-16T22:03:37Z)
- Decanus to Legatus: Synthetic training for 2D-3D human pose lifting [26.108023246654646]
We propose an algorithm to generate infinite 3D synthetic human poses (Legatus) from a 3D pose distribution based on 10 initial handcrafted 3D poses (Decanus).
Our results show that we can achieve 3D pose estimation performance comparable to methods using real data from specialized datasets but in a zero-shot setup, showing the potential of our framework.
arXiv Detail & Related papers (2022-10-05T13:10:19Z)
- On Triangulation as a Form of Self-Supervision for 3D Human Pose Estimation [57.766049538913926]
Supervised approaches to 3D pose estimation from single images are remarkably effective when labeled data is abundant.
Much of the recent attention has shifted towards semi- and/or weakly supervised learning.
We propose to impose multi-view geometrical constraints by means of a differentiable triangulation and to use it as a form of self-supervision during training when no labels are available (a minimal sketch of this idea follows the list below).
arXiv Detail & Related papers (2022-03-29T19:11:54Z)
- Self-Supervised 3D Human Pose Estimation with Multiple-View Geometry [2.7541825072548805]
We present a self-supervised learning algorithm for 3D human pose estimation of a single person based on a multiple-view camera system.
We propose a four-loss function learning algorithm, which does not require any 2D or 3D body pose ground-truth.
arXiv Detail & Related papers (2021-08-17T17:31:24Z)
- MetaPose: Fast 3D Pose from Multiple Views without 3D Supervision [72.5863451123577]
We show how to train a neural model that can perform accurate 3D pose and camera estimation.
Our method outperforms both classical bundle adjustment and weakly-supervised monocular 3D baselines.
arXiv Detail & Related papers (2021-08-10T18:39:56Z)
- VoxelTrack: Multi-Person 3D Human Pose Estimation and Tracking in the Wild [98.69191256693703]
We present VoxelTrack for multi-person 3D pose estimation and tracking from a few cameras which are separated by wide baselines.
It employs a multi-branch network to jointly estimate 3D poses and re-identification (Re-ID) features for all people in the environment.
It outperforms the state-of-the-art methods by a large margin on three public datasets including Shelf, Campus and CMU Panoptic.
arXiv Detail & Related papers (2021-08-05T08:35:44Z)
- Multi-View Multi-Person 3D Pose Estimation with Plane Sweep Stereo [71.59494156155309]
Existing approaches for multi-view 3D pose estimation explicitly establish cross-view correspondences to group 2D pose detections from multiple camera views.
We present our multi-view 3D pose estimation approach based on plane sweep stereo to jointly address the cross-view fusion and 3D pose reconstruction in a single shot.
arXiv Detail & Related papers (2021-04-06T03:49:35Z)
- Synthetic Training for Monocular Human Mesh Recovery [100.38109761268639]
This paper aims to estimate 3D mesh of multiple body parts with large-scale differences from a single RGB image.
The main challenge is lacking training data that have complete 3D annotations of all body parts in 2D images.
We propose a depth-to-scale (D2S) projection to incorporate the depth difference into the projection function to derive per-joint scale variants.
arXiv Detail & Related papers (2020-10-27T03:31:35Z)
- Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis [72.34794624243281]
We propose a self-supervised learning framework to disentangle variations from unlabeled video frames.
Our differentiable formalization, bridging the representation gap between the 3D pose and spatial part maps, allows us to operate on videos with diverse camera movements.
arXiv Detail & Related papers (2020-04-09T07:55:01Z)
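For the triangulation-as-self-supervision entry above, the central operation is a differentiable linear triangulation (DLT) that lifts per-view 2D detections to a 3D point; reprojecting that point into each view gives a training signal without any 3D labels. A minimal sketch, assuming PyTorch and known camera projection matrices (all names and shapes are illustrative, not that paper's code):

```python
import torch

def triangulate_dlt(points_2d, proj_mats):
    """Differentiable DLT triangulation of one joint observed in V views.

    points_2d: (V, 2) pixel coordinates of the joint in each view
    proj_mats: (V, 3, 4) camera projection matrices
    Returns the triangulated 3D point, shape (3,), in world coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous 3D point X:
    #   u * P[2] - P[0]   and   v * P[2] - P[1]
    rows = []
    for (u, v), P in zip(points_2d, proj_mats):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = torch.stack(rows)                   # (2V, 4)
    # The solution is the right singular vector of A with the smallest singular value.
    _, _, Vh = torch.linalg.svd(A)
    X = Vh[-1]
    return X[:3] / X[3]
```

The triangulated joints can serve as pseudo 3D targets or be reprojected and compared against the 2D detections; because the SVD is differentiable, gradients flow back into the pose network.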
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.