HPRNet: Hierarchical Point Regression for Whole-Body Human Pose
Estimation
- URL: http://arxiv.org/abs/2106.04269v1
- Date: Tue, 8 Jun 2021 11:56:38 GMT
- Title: HPRNet: Hierarchical Point Regression for Whole-Body Human Pose
Estimation
- Authors: Nermin Samet and Emre Akbas
- Abstract summary: We present a new bottom-up one-stage method for whole-body pose estimation.
We build a hierarchical point representation of body parts and jointly regress them.
On the COCO WholeBody dataset, HPRNet significantly outperforms all previous bottom-up methods.
- Score: 13.198689566654108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a new bottom-up one-stage method for whole-body
pose estimation, which we name "hierarchical point regression," or HPRNet for
short, referring to the network that implements this method. To handle the
scale variance among different body parts, we build a hierarchical point
representation of body parts and jointly regress them. Unlike the existing
two-stage methods, our method predicts whole-body pose in a constant time
independent of the number of people in an image. On the COCO WholeBody dataset,
HPRNet significantly outperforms all previous bottom-up methods on the keypoint
detection of all whole-body parts (i.e. body, foot, face and hand); it also
achieves state-of-the-art results in the face (75.4 AP) and hand (50.4 AP)
keypoint detection. Code and models are available at
https://github.com/nerminsamet/HPRNet.git.
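
The hierarchical idea can be made concrete with a short sketch. The code below is a minimal, hypothetical illustration of how a bottom-up, one-stage decoder could recover whole-body keypoints from dense outputs: a person-center heatmap, offsets from each person center to its part centers, and offsets from each part center to that part's keypoints, all read out in a single pass. The function name, tensor layouts, and threshold are assumptions for illustration, not the released HPRNet implementation.

```python
import numpy as np

def decode_hierarchical_poses(center_heatmap, part_offsets, kpt_offsets,
                              kpts_per_part, score_thresh=0.3):
    """Hypothetical single-pass decoder for a hierarchical point representation.

    center_heatmap: (H, W) person-center confidence map.
    part_offsets:   (P, 2, H, W) offsets from a person center to each of the
                    P part centers (e.g. body, foot, face, left/right hand).
    kpt_offsets:    (P, K_max, 2, H, W) offsets from each part center to that
                    part's own keypoints.
    kpts_per_part:  list of length P with the number of keypoints per part.
    Returns one dict per detected person, mapping part index -> (K_p, 2) array
    of (y, x) keypoint coordinates.
    """
    H, W = center_heatmap.shape
    poses = []
    # Person candidates: every location above the confidence threshold
    # (a real decoder would additionally apply peak NMS here).
    ys, xs = np.where(center_heatmap > score_thresh)
    for cy, cx in zip(ys, xs):
        pose = {}
        for p, num_kpts in enumerate(kpts_per_part):
            # Level 1: regress the part center relative to the person center.
            dy, dx = part_offsets[p, :, cy, cx]
            py = int(np.clip(np.round(cy + dy), 0, H - 1))
            px = int(np.clip(np.round(cx + dx), 0, W - 1))
            # Level 2: regress the part's keypoints relative to its center,
            # reading the offset maps at the regressed part-center location.
            offs = kpt_offsets[p, :num_kpts, :, py, px]          # (K_p, 2)
            pose[p] = np.stack([py + offs[:, 0], px + offs[:, 1]], axis=1)
        poses.append(pose)
    return poses
```

Because the loop only reads values from fixed-size output maps, decoding cost is governed by the feature-map resolution rather than by a per-person second stage, which is what the constant-time claim in the abstract refers to; the exact hierarchy and output heads used by HPRNet may differ from this sketch.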
Related papers
- Effective Whole-body Pose Estimation with Two-stages Distillation [52.92064408970796]
Whole-body pose estimation localizes the human body, hand, face, and foot keypoints in an image.
We present a two-stage pose Distillation for Whole-body Pose estimators, named DWPose, to improve their effectiveness and efficiency.
arXiv Detail & Related papers (2023-07-29T03:49:28Z)
- Integrating Human Parsing and Pose Network for Human Action Recognition [12.308394270240463]
We introduce the human parsing feature map as a novel modality for action recognition.
We propose Integrating Human Parsing and Pose Network (IPP-Net) for action recognition.
IPP-Net is the first to leverage both skeletons and human parsing feature maps in a dual-branch approach.
arXiv Detail & Related papers (2023-07-16T07:58:29Z)
- AlphaPose: Whole-Body Regional Multi-Person Pose Estimation and Tracking in Real-Time [47.19339667836196]
We present AlphaPose, a system that can perform accurate whole-body pose estimation and tracking jointly while running in real time.
We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-wholebody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset.
arXiv Detail & Related papers (2022-11-07T09:15:38Z)
- Bottom-Up 2D Pose Estimation via Dual Anatomical Centers for Small-Scale Persons [75.86463396561744]
In multi-person 2D pose estimation, the bottom-up methods simultaneously predict poses for all persons.
Our method achieves a 38.4% improvement in bounding box precision and a 39.1% improvement in bounding box recall over the state of the art (SOTA).
For the human pose AP evaluation, we achieve a new SOTA (71.0 AP) on the COCO test-dev set with single-scale testing.
arXiv Detail & Related papers (2022-08-25T10:09:10Z)
- DECA: Deep viewpoint-Equivariant human pose estimation using Capsule Autoencoders [3.2826250607043796]
We show that current 3D Human Pose Estimation methods tend to fail when dealing with viewpoints unseen at training time.
We propose a novel capsule autoencoder network with fast Variational Bayes capsule routing, named DECA.
In the experimental validation, we outperform other methods on depth images from both seen and unseen viewpoints, in both top-view and front-view settings.
arXiv Detail & Related papers (2021-08-19T08:46:15Z)
- Bottom-Up Human Pose Estimation Via Disentangled Keypoint Regression [81.05772887221333]
We study the dense keypoint regression framework, which has previously been inferior to the keypoint detection and grouping framework.
We present a simple yet effective approach, named disentangled keypoint regression (DEKR).
We empirically show that the proposed direct regression method outperforms keypoint detection and grouping methods.
arXiv Detail & Related papers (2021-04-06T05:54:46Z)
- Whole-Body Human Pose Estimation in the Wild [88.09875133989155]
COCO-WholeBody extends the COCO dataset with whole-body annotations.
It is the first benchmark that has manual annotations on the entire human body.
A single-network model, named ZoomNet, is devised to take into account the hierarchical structure of the full human body.
arXiv Detail & Related papers (2020-07-23T08:35:26Z)
- Graph-PCNN: Two Stage Human Pose Estimation with Graph Pose Refinement [54.29252286561449]
We propose a two-stage graph-based and model-agnostic framework, called Graph-PCNN.
In the first stage, a heatmap regression network is applied to obtain a rough localization result, and a set of proposal keypoints, called guided points, is sampled (a minimal sketch of this sampling step appears after this list).
In the second stage, a different visual feature is extracted for each guided point based on its localization.
The relationship between guided points is explored by the graph pose refinement module to get more accurate localization results.
arXiv Detail & Related papers (2020-07-21T04:59:15Z)
- Bottom-Up Human Pose Estimation by Ranking Heatmap-Guided Adaptive Keypoint Estimates [76.51095823248104]
We present several schemes, rarely or only superficially studied before, for improving keypoint detection and grouping (keypoint regression) performance.
First, we exploit the keypoint heatmaps for pixel-wise keypoint regression, instead of treating the two tasks separately, to improve keypoint regression.
Second, we adopt a pixel-wise spatial transformer network to learn adaptive representations for handling the scale and orientation variance.
Third, we present a joint shape and heatvalue scoring scheme to promote the estimated poses that are more likely to be true poses.
arXiv Detail & Related papers (2020-06-28T01:14:59Z)
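
The "guided points" referenced in the Graph-PCNN entry above can be illustrated with a short sketch: sample proposal keypoints as the top-k peaks of each coarse heatmap, which a second-stage module (such as the graph pose refinement described in that entry) would then refine. The function name, the greedy window suppression, and all parameter values below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sample_guided_points(heatmaps, k=10, nms_radius=2):
    """Hypothetical stage-1 sampler: take the top-k peaks of each rough
    keypoint heatmap as 'guided point' proposals for later refinement.

    heatmaps: (K, H, W) array, one coarse heatmap per keypoint type.
    Returns:  (K, k, 3) array of (y, x, score) proposals per keypoint type.
    """
    K, H, W = heatmaps.shape
    proposals = np.zeros((K, k, 3), dtype=np.float32)
    for j in range(K):
        hm = heatmaps[j].astype(np.float32)
        for i in range(k):
            # Current global maximum of the (remaining) heatmap.
            y, x = np.unravel_index(np.argmax(hm), hm.shape)
            proposals[j, i] = (y, x, hm[y, x])
            # Greedily suppress a small window around the chosen peak so the
            # next proposal comes from a different local maximum.
            y0, y1 = max(0, y - nms_radius), min(H, y + nms_radius + 1)
            x0, x1 = max(0, x - nms_radius), min(W, x + nms_radius + 1)
            hm[y0:y1, x0:x1] = -np.inf
    return proposals
```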