JUMPS: Joints Upsampling Method for Pose Sequences
- URL: http://arxiv.org/abs/2007.01151v4
- Date: Wed, 14 Oct 2020 15:00:49 GMT
- Title: JUMPS: Joints Upsampling Method for Pose Sequences
- Authors: Lucas Mourot, François Le Clerc, Cédric Thébault and Pierre Hellier
- Abstract summary: We build on a deep generative model that combines a Generative Adversarial Network (GAN) and an encoder.
We show experimentally that the localization accuracy of the additional joints is on average on par with the original pose estimates.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human Pose Estimation is a low-level task useful for surveillance, human
action recognition, and scene understanding at large. It also offers promising
perspectives for the animation of synthetic characters. For all these
applications, and especially the latter, estimating the positions of many joints
is desirable for improved performance and realism. To this purpose, we propose a
novel method called JUMPS for increasing the number of joints in 2D pose
estimates and recovering occluded or missing joints. We believe this is the
first attempt to address the issue. We build on a deep generative model that
combines a Generative Adversarial Network (GAN) and an encoder. The GAN learns
the distribution of high-resolution human pose sequences, and the encoder maps
the input low-resolution sequences to its latent space. Inpainting is obtained
by computing the latent representation whose decoding by the GAN generator
optimally matches the joint locations at the input. Post-processing a 2D pose
sequence using our method provides a richer representation of the character
motion. We show experimentally that the localization accuracy of the additional
joints is on average on par with the original pose estimates.
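The inpainting step described in the abstract — finding the latent code whose decoding by the generator best matches the observed joints — can be sketched as follows. This is a toy illustration, not the authors' implementation: a random linear map stands in for the trained GAN generator, and the latent dimension, joint counts, observed-joint indices, and gradient-descent optimizer are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8        # size of the generator's latent space (assumed)
HIGH_RES_JOINTS = 24  # joints produced by the generator (assumed)
OBSERVED = [0, 2, 5, 7, 11, 13, 17, 19]  # indices of the input low-res joints

# Stand-in for the trained GAN generator: decodes z into a flattened 2D pose.
W = rng.standard_normal((HIGH_RES_JOINTS * 2, LATENT_DIM))

def generate(z):
    """Decode a latent code into a (HIGH_RES_JOINTS, 2) pose."""
    return (W @ z).reshape(HIGH_RES_JOINTS, 2)

# Simulate an input low-resolution pose: only the OBSERVED joints are known.
z_true = rng.standard_normal(LATENT_DIM)
low_res_pose = generate(z_true)[OBSERVED]

# Inpainting: find the latent code whose decoding best matches the observed
# joints, by gradient descent on the squared matching error.
rows = np.concatenate([[2 * j, 2 * j + 1] for j in OBSERVED])
W_obs = W[rows]            # generator rows producing the observed coordinates
target = low_res_pose.ravel()
z = np.zeros(LATENT_DIM)
for _ in range(1000):
    residual = W_obs @ z - target
    z -= 0.02 * (W_obs.T @ residual)  # gradient of 0.5 * ||residual||^2

# Decoding the optimized latent code yields all 24 joints, including the
# ones missing from the input.
full_pose = generate(z)
error = np.abs(full_pose[OBSERVED] - low_res_pose).max()
print(f"max error on observed joints: {error:.6f}")
```

In the paper the generator is a deep network trained on high-resolution pose sequences, so the optimization is nonlinear and is initialized from the encoder's output; the linear toy above only conveys the inversion principle.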
Related papers
- No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images [100.80376573969045]
NoPoSplat is a feed-forward model capable of reconstructing 3D scenes parameterized by 3D Gaussians from multi-view images.
Our model achieves real-time 3D Gaussian reconstruction during inference.
This work makes significant advances in pose-free generalizable 3D reconstruction and demonstrates its applicability to real-world scenarios.
arXiv Detail & Related papers (2024-10-31T17:58:22Z)
- PoseGraphNet++: Enriching 3D Human Pose with Orientation Estimation [43.261111977510105]
Existing skeleton-based 3D human pose estimation methods only predict joint positions.
We present PoseGraphNet++, a novel 2D-to-3D lifting Graph Convolution Network that predicts the complete human pose in 3D.
arXiv Detail & Related papers (2023-08-22T13:42:15Z)
- PoseFormerV2: Exploring Frequency Domain for Efficient and Robust 3D Human Pose Estimation [19.028127284305224]
We propose PoseFormerV2, which exploits a compact representation of lengthy skeleton sequences in the frequency domain to efficiently scale up the receptive field.
With minimum modifications to PoseFormer, the proposed method effectively fuses features both in the time domain and frequency domain, enjoying a better speed-accuracy trade-off than its precursor.
arXiv Detail & Related papers (2023-03-30T15:45:51Z)
- Kinematic-aware Hierarchical Attention Network for Human Pose Estimation in Videos [17.831839654593452]
Previous human pose estimation methods have shown promising results by leveraging features of consecutive frames.
However, most approaches compromise accuracy to reduce jitter and do not model the temporal aspects of human motion.
We design an architecture that exploits kinematic keypoint features.
arXiv Detail & Related papers (2022-11-29T01:46:11Z)
- Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net that constitutes a common deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z)
- MSR-GCN: Multi-Scale Residual Graph Convolution Networks for Human Motion Prediction [34.565986275769745]
We propose a novel Multi-Scale Residual Graph Convolution Network (MSR-GCN) for human pose prediction task.
Our proposed approach is evaluated on two standard benchmark datasets, i.e., the Human3.6M dataset and the CMU Mocap dataset.
arXiv Detail & Related papers (2021-08-16T15:26:23Z)
- Improving Robustness and Accuracy via Relative Information Encoding in 3D Human Pose Estimation [59.94032196768748]
We propose a relative information encoding method that yields positional and temporal enhanced representations.
Our method outperforms state-of-the-art methods on two public datasets.
arXiv Detail & Related papers (2021-07-29T14:12:19Z)
- An Adversarial Human Pose Estimation Network Injected with Graph Structure [75.08618278188209]
In this paper, we design a novel generative adversarial network (GAN) to improve the localization accuracy of visible joints when some joints are invisible.
The network consists of two simple but efficient modules: the Cascade Feature Network (CFN) and the Graph Structure Network (GSN).
arXiv Detail & Related papers (2021-03-29T12:07:08Z)
- HDNet: Human Depth Estimation for Multi-Person Camera-Space Localization [83.57863764231655]
We propose the Human Depth Estimation Network (HDNet), an end-to-end framework for absolute root joint localization.
A skeleton-based Graph Neural Network (GNN) is utilized to propagate features among joints.
We evaluate our HDNet on the root joint localization and root-relative 3D pose estimation tasks with two benchmark datasets.
arXiv Detail & Related papers (2020-07-17T12:44:23Z)
- Anatomy-aware 3D Human Pose Estimation with Bone-based Pose Decomposition [92.99291528676021]
Instead of directly regressing the 3D joint locations, we decompose the task into bone direction prediction and bone length prediction.
Our motivation is the fact that the bone lengths of a human skeleton remain consistent across time.
Our full model outperforms the previous best results on Human3.6M and MPI-INF-3DHP datasets.
arXiv Detail & Related papers (2020-02-24T15:49:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.