3D Pose Detection in Videos: Focusing on Occlusion
- URL: http://arxiv.org/abs/2006.13517v1
- Date: Wed, 24 Jun 2020 07:01:17 GMT
- Title: 3D Pose Detection in Videos: Focusing on Occlusion
- Authors: Justin Wang, Edward Xu, Kangrui Xue, Lukasz Kidzinski
- Abstract summary: We build upon existing methods for occlusion-aware 3D pose detection in videos.
We implement a two-stage architecture in which a stacked hourglass network produces 2D pose predictions.
To facilitate prediction on poses with occluded joints, we introduce an intuitive generalization of the cylinder man model.
- Score: 0.4588028371034406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we build upon existing methods for occlusion-aware 3D pose
detection in videos. We implement a two-stage architecture in which a
stacked hourglass network produces 2D pose predictions, which are then
fed into a temporal convolutional network to produce 3D pose predictions.
To facilitate prediction on poses with occluded joints, we introduce an
intuitive generalization of the cylinder man model used to generate occlusion
labels. We find that the occlusion-aware network is able to achieve a
mean-per-joint-position error 5 mm less than our linear baseline model on the
Human3.6M dataset. Compared to our temporal convolutional network baseline, we
achieve a comparable mean-per-joint-position error, 0.1 mm lower, at reduced
computational cost.
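As a point of reference, the mean-per-joint-position error (MPJPE) reported above can be sketched as follows. This is an illustrative implementation, not the paper's code; the joint layout and units (Human3.6M uses 17 joints in millimetres) are assumptions.

```python
import math

def mpjpe(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth
    3D joints. `pred` and `gt` are equal-length lists of (x, y, z) tuples."""
    assert len(pred) == len(gt)
    total = 0.0
    for (px, py, pz), (gx, gy, gz) in zip(pred, gt):
        total += math.sqrt((px - gx) ** 2 + (py - gy) ** 2 + (pz - gz) ** 2)
    return total / len(pred)

# Toy example with two joints: distances are 0 and 5, so MPJPE is 2.5.
pred = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0)]
gt = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
print(mpjpe(pred, gt))  # 2.5
```

A 5 mm reduction in this metric, as reported against the linear baseline, is averaged over all joints and frames of the evaluation set.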
Related papers
- UPose3D: Uncertainty-Aware 3D Human Pose Estimation with Cross-View and Temporal Cues [55.69339788566899]
UPose3D is a novel approach for multi-view 3D human pose estimation.
It improves robustness and flexibility without requiring direct 3D annotations.
arXiv Detail & Related papers (2024-04-23T00:18:00Z)
- Occlusion Resilient 3D Human Pose Estimation [52.49366182230432]
Occlusions remain one of the key challenges in 3D body pose estimation from single-camera video sequences.
We demonstrate the effectiveness of this approach compared to state-of-the-art techniques that infer poses from single-camera sequences.
arXiv Detail & Related papers (2024-02-16T19:29:43Z)
- 3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features [70.50665869806188]
3DiffTection is a state-of-the-art method for 3D object detection from single images.
We fine-tune a diffusion model to perform novel view synthesis conditioned on a single image.
We further train the model on target data with detection supervision.
arXiv Detail & Related papers (2023-11-07T23:46:41Z)
- LInKs "Lifting Independent Keypoints" -- Partial Pose Lifting for Occlusion Handling with Improved Accuracy in 2D-3D Human Pose Estimation [4.648549457266638]
We present LInKs, a novel unsupervised learning method to recover 3D human poses from 2D kinematic skeletons.
Our approach follows a unique two-step process, which involves first lifting the occluded 2D pose to the 3D domain.
This lift-then-fill approach leads to significantly more accurate results compared to models that complete the pose in 2D space alone.
arXiv Detail & Related papers (2023-09-13T18:28:04Z) - (Fusionformer):Exploiting the Joint Motion Synergy with Fusion Network
Based On Transformer for 3D Human Pose Estimation [1.52292571922932]
Many previous methods lack an understanding of local joint information, considering only the temporal relationship of a single joint.
Our proposed Fusionformer method introduces a global-temporal self-trajectory module and a cross-temporal self-trajectory module.
The results show an improvement of 2.4% MPJPE and 4.3% P-MPJPE on the Human3.6M dataset.
arXiv Detail & Related papers (2022-10-08T12:22:10Z) - Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose
Estimation [70.32536356351706]
We introduce MRP-Net that constitutes a common deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z) - Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based
Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z) - Beyond Weak Perspective for Monocular 3D Human Pose Estimation [6.883305568568084]
We consider the task of 3D joints location and orientation prediction from a monocular video.
We first infer 2D joints locations with an off-the-shelf pose estimation algorithm.
We then adhere to the SMPLify algorithm which receives those initial parameters.
arXiv Detail & Related papers (2020-09-14T16:23:14Z) - Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them, however, the probability of effective samples is relatively small in the 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3d parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
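The step-by-step refinement described in the entry above, changing exactly one 3D parameter per step toward the ground truth, can be sketched as a greedy search. This is an illustrative simplification with an assumed step size and a toy error function; the actual paper learns the step-selection policy with reinforcement learning rather than exhaustive greedy evaluation.

```python
# Greedy stand-in for single-parameter refinement: at each step, try
# adjusting each parameter of the 3D box by +/- delta and keep the
# single change that most reduces the error. Stop when no one-parameter
# move improves the error.
def refine(params, error_fn, delta=0.1, steps=20):
    params = list(params)
    for _ in range(steps):
        best = (error_fn(params), None, 0.0)
        for i in range(len(params)):
            for d in (-delta, delta):
                trial = params[:]
                trial[i] += d
                e = error_fn(trial)
                if e < best[0]:
                    best = (e, i, d)
        if best[1] is None:  # converged: no single move helps
            break
        params[best[1]] += best[2]
    return params

# Toy error: squared distance to a known target box center (x, y, z).
target = [1.0, -0.5, 2.0]
err = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
refined = refine([0.0, 0.0, 0.0], err)
```

An RL agent replaces the inner exhaustive loop with a learned policy that directly proposes which parameter to change, receiving a reward only after several steps.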
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.