Permutation-Invariant Relational Network for Multi-person 3D Pose
Estimation
- URL: http://arxiv.org/abs/2204.04913v1
- Date: Mon, 11 Apr 2022 07:23:54 GMT
- Authors: Nicolas Ugrinovic, Adria Ruiz, Antonio Agudo, Alberto Sanfeliu,
Francesc Moreno-Noguer
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recovering multi-person 3D poses from a single RGB image is a severely
ill-conditioned problem due not only to the inherent 2D-3D depth ambiguity but
also because of inter-person occlusions and body truncations. Recent works have
shown promising results by simultaneously reasoning for different people but in
all cases within a local neighborhood. An interesting exception is PI-Net,
which introduces a self-attention block to reason for all people in the image
at the same time and refine potentially noisy initial 3D poses. However, the
proposed methodology requires defining one of the individuals as a reference,
and the outcome of the algorithm is sensitive to this choice. In this paper, we
model people interactions as a whole, independently of their number, and in a
permutation-invariant manner, building upon the Set Transformer. We leverage
this representation to refine the initial 3D poses estimated by off-the-shelf
detectors. A thorough evaluation demonstrates that our approach is able to
boost the performance of the initially estimated 3D poses by large margins,
achieving state-of-the-art results on MuPoTS-3D, CMU Panoptic and NBA2K
datasets. Additionally, the proposed module is computationally efficient and
can be used as a drop-in complement for any 3D pose detector in multi-person
scenes.
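The key property claimed above can be checked directly: a self-attention block over the set of per-person features is permutation-equivariant, so any pooled summary of the set is permutation-invariant. The following is a minimal numpy sketch of a single attention layer (illustrative only; the paper's actual module is a full Set Transformer with additional components), verifying that reordering the people reorders the outputs correspondingly:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a set X of shape (n_people, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    # Row-wise softmax.
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)
    return A @ V

rng = np.random.default_rng(0)
d = 8
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
X = rng.standard_normal((5, d))   # 5 people; works for any number
perm = rng.permutation(5)

Y = self_attention(X, Wq, Wk, Wv)
Y_perm = self_attention(X[perm], Wq, Wk, Wv)

# Per-person outputs are permutation-equivariant ...
assert np.allclose(Y[perm], Y_perm)
# ... hence any symmetric pooling over the set is permutation-invariant.
assert np.allclose(Y.mean(axis=0), Y_perm.mean(axis=0))
```

Because no step depends on row order or on a designated reference person, the refinement does not suffer from the reference-sensitivity issue noted for PI-Net.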
Related papers
- DiffuPose: Monocular 3D Human Pose Estimation via Denoising Diffusion
Probabilistic Model [25.223801390996435]
This paper focuses on reconstructing a 3D pose from a single 2D keypoint detection.
We build a novel diffusion-based framework to effectively sample diverse 3D poses from an off-the-shelf 2D detector.
We evaluate our method on the widely adopted Human3.6M and HumanEva-I datasets.
arXiv Detail & Related papers (2022-12-06T07:22:20Z)
- Multi-initialization Optimization Network for Accurate 3D Human Pose and
Shape Estimation [75.44912541912252]
We propose a three-stage framework named Multi-Initialization Optimization Network (MION)
In the first stage, we strategically select different coarse 3D reconstruction candidates that are compatible with the 2D keypoints of the input sample.
In the second stage, we design a mesh refinement transformer (MRT) to respectively refine each coarse reconstruction result via a self-attention mechanism.
Finally, a Consistency Estimation Network (CEN) is proposed to find the best result from multiple candidates by evaluating whether the visual evidence in the RGB image matches a given 3D reconstruction.
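The select-refine-score pipeline above can be sketched at its final stage: score each refined candidate against the 2D evidence and keep the most consistent one. The scoring function below is a hypothetical stand-in (plain 2D reprojection error) for MION's learned Consistency Estimation Network:

```python
import numpy as np

def pick_best(candidates3d, keypoints2d, project):
    """Return the index of the candidate whose projection best matches
    the 2D keypoints. Illustrative stand-in for a learned consistency
    network: consistency is scored here as 2D reprojection error."""
    errors = [np.linalg.norm(project(c) - keypoints2d) for c in candidates3d]
    return int(np.argmin(errors))

# Toy example with an orthographic projection and two 2-joint candidates.
ortho = lambda joints: joints[:, :2]
gt2d = np.array([[0.0, 0.0], [1.0, 1.0]])
cands = [np.array([[0.0, 0.0, 2.0], [1.0, 1.0, 2.0]]),   # consistent with gt2d
         np.array([[0.5, 0.5, 2.0], [1.5, 1.5, 2.0]])]   # off by 0.5 per joint
best = pick_best(cands, gt2d, ortho)   # -> 0
```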
arXiv Detail & Related papers (2021-12-24T02:43:58Z)
- Shape-aware Multi-Person Pose Estimation from Multi-View Images [47.13919147134315]
Our proposed coarse-to-fine pipeline first aggregates noisy 2D observations from multiple camera views into 3D space.
The final pose estimates are attained from a novel optimization scheme which links high-confidence multi-view 2D observations and 3D joint candidates.
arXiv Detail & Related papers (2021-10-05T20:04:21Z)
- VoxelTrack: Multi-Person 3D Human Pose Estimation and Tracking in the
Wild [98.69191256693703]
We present VoxelTrack for multi-person 3D pose estimation and tracking from a few cameras which are separated by wide baselines.
It employs a multi-branch network to jointly estimate 3D poses and re-identification (Re-ID) features for all people in the environment.
It outperforms the state-of-the-art methods by a large margin on three public datasets including Shelf, Campus and CMU Panoptic.
arXiv Detail & Related papers (2021-08-05T08:35:44Z)
- Probabilistic Monocular 3D Human Pose Estimation with Normalizing Flows [24.0966076588569]
We propose a normalizing flow based method that exploits the deterministic 3D-to-2D mapping to solve the ambiguous inverse 2D-to-3D problem.
We evaluate our approach on the two benchmark datasets Human3.6M and MPI-INF-3DHP, outperforming all comparable methods in most metrics.
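The deterministic 3D-to-2D mapping that such methods invert can be sketched as a standard pinhole projection (an illustrative camera model, not necessarily the one used in the paper; focal length and principal point below are made-up values):

```python
import numpy as np

def project(joints3d, f=1000.0, c=(512.0, 512.0)):
    """Pinhole projection of 3D joints (n, 3) in camera coordinates
    to 2D pixel coordinates (n, 2)."""
    x, y, z = joints3d[:, 0], joints3d[:, 1], joints3d[:, 2]
    u = f * x / z + c[0]
    v = f * y / z + c[1]
    return np.stack([u, v], axis=1)

# A joint 2 m in front of the camera, 0.1 m right of the optical axis:
pts = np.array([[0.1, 0.0, 2.0]])
uv = project(pts)   # -> [[562., 512.]]
```

The inverse problem is ambiguous because depth z is lost: infinitely many 3D joints project to the same pixel, which is exactly the uncertainty a normalizing flow can model as a distribution over 3D poses.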
arXiv Detail & Related papers (2021-07-29T07:33:14Z)
- PandaNet: Anchor-Based Single-Shot Multi-Person 3D Pose Estimation [35.791868530073955]
We present PandaNet, a new single-shot, anchor-based, multi-person 3D pose estimation approach.
The proposed model performs bounding box detection and, for each detected person, 2D and 3D pose regression in a single forward pass.
It does not need any post-processing to regroup joints since the network predicts a full 3D pose for each bounding box.
arXiv Detail & Related papers (2021-01-07T10:32:17Z)
- Synthetic Training for Monocular Human Mesh Recovery [100.38109761268639]
This paper aims to estimate 3D mesh of multiple body parts with large-scale differences from a single RGB image.
The main challenge is the lack of training data with complete 3D annotations of all body parts in 2D images.
We propose a depth-to-scale (D2S) projection to incorporate the depth difference into the projection function to derive per-joint scale variants.
arXiv Detail & Related papers (2020-10-27T03:31:35Z)
- SMAP: Single-Shot Multi-Person Absolute 3D Pose Estimation [46.85865451812981]
We propose a novel system that first regresses a set of 2.5D representations of body parts and then reconstructs the 3D absolute poses based on these 2.5D representations with a depth-aware part association algorithm.
Such a single-shot bottom-up scheme allows the system to better learn and reason about the inter-person depth relationship, improving both 3D and 2D pose estimation.
arXiv Detail & Related papers (2020-08-26T09:56:07Z)
- Coherent Reconstruction of Multiple Humans from a Single Image [68.3319089392548]
In this work, we address the problem of multi-person 3D pose estimation from a single image.
A typical regression approach in the top-down setting of this problem would first detect all humans and then reconstruct each one of them independently.
Our goal is to train a single network that learns to avoid these problems and generate a coherent 3D reconstruction of all the humans in the scene.
arXiv Detail & Related papers (2020-06-15T17:51:45Z)
- Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A
Geometric Approach [76.10879433430466]
We propose to estimate 3D human pose from multi-view images and a few IMUs attached to the person's limbs.
It operates by firstly detecting 2D poses from the two signals, and then lifting them to the 3D space.
The simple two-step approach reduces the error of the state-of-the-art by a large margin on a public dataset.
arXiv Detail & Related papers (2020-03-25T00:26:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.