Leveraging Temporal Joint Depths for Improving 3D Human Pose Estimation in Video
- URL: http://arxiv.org/abs/2011.02172v1
- Date: Wed, 4 Nov 2020 08:23:41 GMT
- Title: Leveraging Temporal Joint Depths for Improving 3D Human Pose Estimation in Video
- Authors: Naoki Kato, Hiroto Honda, Yusuke Uchida
- Abstract summary: We propose to estimate a 3D pose in each frame of a video and refine it considering temporal information.
The proposed approach reduces the ambiguity of the joint depths and improves the 3D pose estimation accuracy.
- Score: 2.7501248535328306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The effectiveness of approaches that predict 3D poses from the 2D
poses estimated in each frame of a video has been demonstrated for 3D human
pose estimation. However, 2D poses that lack the person's appearance
information are highly ambiguous with respect to joint depths. In this paper,
we propose to estimate a 3D pose in each frame of a video and refine it using
temporal information. The proposed approach reduces the ambiguity of the
joint depths and improves 3D pose estimation accuracy.
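As a concrete illustration of the idea in the abstract, here is a minimal PyTorch sketch of refining per-frame 3D poses with temporal context. The design (a temporal 1D convolution predicting per-joint depth residuals, targeting exactly the depth ambiguity the abstract mentions), the kernel size, and all dimensions are assumptions for illustration, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class TemporalDepthRefiner(nn.Module):
    """Refine per-frame joint depths using temporal context.

    Hypothetical design: a temporal 1D convolution over a window of
    per-frame 3D poses predicts a residual correction for the depth (z)
    coordinate of each joint.
    """

    def __init__(self, num_joints: int = 17, hidden: int = 256, kernel: int = 9):
        super().__init__()
        in_ch = num_joints * 3  # flattened (x, y, z) per joint, per frame
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, hidden, kernel, padding=kernel // 2),
            nn.ReLU(),
            nn.Conv1d(hidden, num_joints, 1),  # one depth residual per joint
        )

    def forward(self, poses_3d: torch.Tensor) -> torch.Tensor:
        # poses_3d: (batch, frames, joints, 3) per-frame estimates
        b, t, j, _ = poses_3d.shape
        x = poses_3d.reshape(b, t, j * 3).transpose(1, 2)  # (b, j*3, t)
        dz = self.net(x).transpose(1, 2)                   # (b, t, j)
        refined = poses_3d.clone()
        refined[..., 2] = refined[..., 2] + dz             # adjust depths only
        return refined

refined = TemporalDepthRefiner()(torch.randn(2, 81, 17, 3))  # (2, 81, 17, 3)
```

Correcting only the z coordinate keeps the (usually more reliable) image-plane estimates fixed while letting temporal context resolve the depth ambiguity.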
Related papers
- SpaRP: Fast 3D Object Reconstruction and Pose Estimation from Sparse Views [36.02533658048349]
We propose a novel method, SpaRP, to reconstruct a 3D textured mesh and estimate the relative camera poses for sparse-view images.
SpaRP distills knowledge from 2D diffusion models and finetunes them to implicitly deduce the 3D spatial relationships between the sparse views.
It requires only about 20 seconds to produce a textured mesh and camera poses for the input views.
arXiv Detail & Related papers (2024-08-19T17:53:10Z)
- UPose3D: Uncertainty-Aware 3D Human Pose Estimation with Cross-View and Temporal Cues [55.69339788566899]
UPose3D is a novel approach for multi-view 3D human pose estimation.
It improves robustness and flexibility without requiring direct 3D annotations.
arXiv Detail & Related papers (2024-04-23T00:18:00Z)
- SPGNet: Spatial Projection Guided 3D Human Pose Estimation in Low Dimensional Space [14.81199315166042]
We propose a method for 3D human pose estimation that incorporates multi-dimensional re-projection into supervised learning; a sketch of such a re-projection loss follows this entry.
On the Human3.6M dataset, our approach outperforms many state-of-the-art methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-06-04T00:51:00Z)
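The SPGNet summary above mentions mixing re-projection into supervised learning. Below is a minimal PyTorch sketch of one common realization of that idea, a pinhole re-projection term added to a 3D supervision loss; the projection model, the loss weighting `w_2d`, and the function names are illustrative assumptions, not SPGNet's published formulation.

```python
import torch
import torch.nn.functional as F

def project_to_2d(pose_3d: torch.Tensor, focal: float = 1.0) -> torch.Tensor:
    """Hypothetical pinhole projection: scale x, y by focal length / depth."""
    z = pose_3d[..., 2:3].clamp(min=1e-4)   # avoid division by zero
    return focal * pose_3d[..., :2] / z

def supervised_loss_with_reprojection(pred_3d, gt_3d, gt_2d, w_2d: float = 0.1):
    """3D supervision plus a 2D re-projection consistency term.

    The 0.1 weight and the simple pinhole camera are assumptions made for
    illustration.
    """
    loss_3d = F.mse_loss(pred_3d, gt_3d)
    loss_2d = F.mse_loss(project_to_2d(pred_3d), gt_2d)
    return loss_3d + w_2d * loss_2d

# Dummy usage: batch of 4 poses, 17 joints, positive depths
pred_3d = torch.rand(4, 17, 3) + 0.5
gt_3d = torch.rand(4, 17, 3) + 0.5
loss = supervised_loss_with_reprojection(pred_3d, gt_3d, project_to_2d(gt_3d))
```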
- PONet: Robust 3D Human Pose Estimation via Learning Orientations Only [116.1502793612437]
We propose a novel Pose Orientation Net (PONet) that robustly estimates 3D pose by learning orientations only.
PONet estimates the 3D orientation of each limb from local image evidence and recovers the 3D pose from these orientations; a sketch of this recovery step follows the entry.
We evaluate our method on multiple datasets, including Human3.6M, MPII, MPI-INF-3DHP, and 3DPW.
arXiv Detail & Related papers (2021-12-21T12:48:48Z)
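To illustrate how a pose can be recovered from limb orientations as the PONet summary describes, here is a minimal NumPy sketch that accumulates joint positions from unit bone directions and known bone lengths; the skeleton in `PARENTS` and `BONE_LENGTHS` is a toy assumption, and PONet's actual decoding may differ.

```python
import numpy as np

# Hypothetical 5-joint skeleton: pelvis -> spine -> neck -> head, pelvis -> hip
PARENTS = [-1, 0, 1, 2, 0]                 # parent index per joint (-1 = root)
BONE_LENGTHS = [0.0, 0.5, 0.3, 0.2, 0.4]   # metres, assumed known per subject

def pose_from_orientations(directions: np.ndarray) -> np.ndarray:
    """Recover 3D joint positions from per-limb unit direction vectors.

    directions[j] is the unit 3D direction of the bone ending at joint j.
    Joints are accumulated root-to-leaf along the kinematic chain.
    """
    joints = np.zeros((len(PARENTS), 3))
    for j, parent in enumerate(PARENTS):
        if parent >= 0:
            joints[j] = joints[parent] + BONE_LENGTHS[j] * directions[j]
    return joints

dirs = np.random.randn(5, 3)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # normalize to unit length
print(pose_from_orientations(dirs))
```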
- PoseRN: A 2D pose refinement network for bias-free multi-view 3D human pose estimation [21.51166171743293]
We propose a new 2D pose refinement network that learns to predict the human bias in the estimated 2D pose; a sketch of this bias-removal pattern follows the entry.
Our proposed pose refinement network allows us to efficiently remove the human bias in the estimated 2D poses and achieve highly accurate multi-view 3D human pose estimation.
arXiv Detail & Related papers (2021-07-07T03:49:36Z)
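The PoseRN summary describes predicting and removing the human bias in estimated 2D poses. The following is a minimal sketch of that bias-subtraction pattern, assuming a plain MLP; `BiasRefiner` is a hypothetical name and design, not the published network.

```python
import torch
import torch.nn as nn

class BiasRefiner(nn.Module):
    """Predict the systematic bias in an estimated 2D pose and subtract it.

    Layer sizes and the plain-MLP design are assumptions for illustration.
    """

    def __init__(self, num_joints: int = 17, hidden: int = 128):
        super().__init__()
        self.bias_net = nn.Sequential(
            nn.Linear(num_joints * 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_joints * 2),
        )

    def forward(self, pose_2d: torch.Tensor) -> torch.Tensor:
        # pose_2d: (batch, joints, 2) estimated 2D keypoints
        b, j, _ = pose_2d.shape
        bias = self.bias_net(pose_2d.reshape(b, -1)).reshape(b, j, 2)
        return pose_2d - bias   # remove the predicted bias

refined = BiasRefiner()(torch.randn(8, 17, 2))  # (8, 17, 2)
```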
- Self-Attentive 3D Human Pose and Shape Estimation from Videos [82.63503361008607]
We present a video-based learning algorithm for 3D human pose and shape estimation.
We exploit temporal information in videos and propose a self-attention module; a sketch of such a module follows this entry.
We evaluate our method on the 3DPW, MPI-INF-3DHP, and Human3.6M datasets.
arXiv Detail & Related papers (2021-03-26T00:02:19Z)
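The entry above names a self-attention module over temporal video features. Here is a minimal sketch of such a module using PyTorch's built-in multi-head attention; the feature dimension, head count, and single-layer design are assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    """Attend across video frames so each frame's features see the others."""

    def __init__(self, feat_dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, frames, feat_dim) per-frame features
        attended, _ = self.attn(frame_feats, frame_feats, frame_feats)
        return self.norm(frame_feats + attended)   # residual connection

out = TemporalSelfAttention()(torch.randn(2, 16, 256))  # (2, 16, 256)
```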
- Graph and Temporal Convolutional Networks for 3D Multi-person Pose Estimation in Monocular Videos [33.974241749058585]
We propose a novel framework integrating graph convolutional networks (GCNs) and temporal convolutional networks (TCNs) to robustly estimate camera-centric multi-person 3D poses.
In particular, we introduce a human-joint GCN, which employs the 2D pose estimator's confidence scores to improve the pose estimation results; a sketch of a confidence-weighted graph convolution follows this entry.
The two GCNs, operating on joints and bones respectively, work together to estimate the spatial frame-wise 3D poses and can make use of both visible joint and bone information in the target frame to estimate the occluded or missing human-part information.
arXiv Detail & Related papers (2020-12-22T03:01:19Z)
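As a sketch of how a human-joint GCN can use the 2D estimator's confidence scores, the layer below scales each joint's features by its confidence before propagating over the skeleton graph; this is our reading of the summary above, not the paper's exact layer.

```python
import torch
import torch.nn as nn

class ConfidenceWeightedGCNLayer(nn.Module):
    """Graph convolution over the joint graph with confidence-scaled inputs."""

    def __init__(self, in_dim: int, out_dim: int, adjacency: torch.Tensor):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Row-normalized joint adjacency (with self-loops), fixed per skeleton.
        adj = adjacency + torch.eye(adjacency.shape[0])
        self.register_buffer("adj", adj / adj.sum(dim=1, keepdim=True))

    def forward(self, feats: torch.Tensor, conf: torch.Tensor) -> torch.Tensor:
        # feats: (batch, joints, in_dim); conf: (batch, joints) in [0, 1]
        weighted = feats * conf.unsqueeze(-1)   # down-weight unreliable joints
        return torch.relu(self.adj @ self.linear(weighted))

# Toy 3-joint chain: edges 0-1 and 1-2
A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
layer = ConfidenceWeightedGCNLayer(2, 8, A)
out = layer(torch.randn(4, 3, 2), torch.rand(4, 3))  # (4, 3, 8)
```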
- Residual Pose: A Decoupled Approach for Depth-based 3D Human Pose Estimation [18.103595280706593]
We leverage recent advances in reliable CNN-based 2D pose estimation to estimate the 3D pose of people from depth images; a sketch of depth-based lifting follows this entry.
Our approach achieves very competitive results both in accuracy and speed on two public datasets.
arXiv Detail & Related papers (2020-11-10T10:08:13Z)
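To illustrate depth-based lifting as in the entry above, here is a generic pinhole back-projection of detected 2D joints through a depth map; it omits the paper's decoupled residual refinement, and the camera intrinsics are assumed known.

```python
import numpy as np

def lift_2d_to_3d(joints_2d: np.ndarray, depth_map: np.ndarray,
                  fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project 2D joints to 3D camera coordinates via the depth map.

    A generic pinhole-model illustration, not the paper's formulation.
    """
    joints_3d = np.zeros((len(joints_2d), 3))
    for i, (u, v) in enumerate(joints_2d):
        z = depth_map[int(round(v)), int(round(u))]   # depth at the joint pixel
        joints_3d[i] = [(u - cx) * z / fx, (v - cy) * z / fy, z]
    return joints_3d

depth = np.full((480, 640), 2.0)                  # dummy 2 m depth image
pts = np.array([[320.0, 240.0], [350.0, 260.0]])  # two detected joints
print(lift_2d_to_3d(pts, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5))
```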
- Towards Generalization of 3D Human Pose Estimation In The Wild [73.19542580408971]
3DBodyTex.Pose is a dataset that addresses the task of 3D human pose estimation in-the-wild.
3DBodyTex.Pose offers high quality and rich data containing 405 different real subjects in various clothing and poses, and 81k image samples with ground-truth 2D and 3D pose annotations.
arXiv Detail & Related papers (2020-04-21T13:31:58Z)
- Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis [72.34794624243281]
We propose a self-supervised learning framework to disentangle variations from unlabeled video frames.
Our differentiable formalization, bridging the representation gap between the 3D pose and spatial part maps, allows us to operate on videos with diverse camera movements.
arXiv Detail & Related papers (2020-04-09T07:55:01Z)
- Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach [76.10879433430466]
We propose to estimate 3D human pose from multi-view images and a few IMUs attached to a person's limbs.
It operates by first detecting 2D poses from the two signals and then lifting them to 3D space; a triangulation sketch follows this entry.
The simple two-step approach reduces the error of the state-of-the-art by a large margin on a public dataset.
arXiv Detail & Related papers (2020-03-25T00:26:54Z)
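The two-step detect-then-lift pipeline described above is commonly implemented with linear triangulation. Below is a textbook DLT (direct linear transform) sketch for one joint observed in multiple calibrated views; it omits the paper's IMU fusion and is not the authors' exact geometric method.

```python
import numpy as np

def triangulate_point(projections, points_2d):
    """Linear (DLT) triangulation of one joint from multiple views.

    projections: known 3x4 camera matrices; points_2d: matching pixel
    coordinates. Each view contributes two rows to a homogeneous system
    whose least-squares null vector is the 3D point.
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]                    # homogeneous -> Euclidean

# Two dummy cameras: identity pose and a 1 m baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 3.0, 1.0])
pts = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate_point([P1, P2], pts))   # approx. [0.5, 0.2, 3.0]
```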
This list is automatically generated from the titles and abstracts of the papers on this site.