Keypoint Communities
- URL: http://arxiv.org/abs/2110.00988v1
- Date: Sun, 3 Oct 2021 11:50:34 GMT
- Title: Keypoint Communities
- Authors: Duncan Zauss, Sven Kreiss, Alexandre Alahi
- Abstract summary: We present a fast bottom-up method that jointly detects over 100 keypoints on humans or objects.
We use a graph centrality measure to assign training weights to different parts of a pose.
Our method generalizes to car poses.
- Score: 87.06615538315003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a fast bottom-up method that jointly detects over 100 keypoints on
humans or objects, also referred to as human/object pose estimation. We model
all keypoints belonging to a human or an object -- the pose -- as a graph and
leverage insights from community detection to quantify the independence of
keypoints. We use a graph centrality measure to assign training weights to
different parts of a pose. Our proposed measure quantifies how tightly a
keypoint is connected to its neighborhood. Our experiments show that our method
outperforms all previous methods for human pose estimation with fine-grained
keypoint annotations on the face, the hands and the feet with a total of 133
keypoints. We also show that our method generalizes to car poses.
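As a rough illustration of the weighting idea described in the abstract (not the authors' exact formulation), the sketch below builds a small pose graph with networkx and turns a graph centrality measure into per-keypoint training weights. The toy skeleton, the use of harmonic centrality, and the choice to give less tightly connected keypoints larger weights are all assumptions for illustration; the paper derives its weights from a community-detection-based measure over the full 133-keypoint graph.

```python
# Illustrative sketch only: derive per-keypoint training weights from a
# graph centrality measure, loosely following the idea in the abstract.
# The tiny skeleton and the use of harmonic centrality are assumptions,
# not the paper's exact graph or measure.
import networkx as nx

# A toy "pose" graph: a wrist connected to a palm and three finger chains
# (a stand-in for the full 133-keypoint human pose graph).
edges = [
    ("wrist", "palm"),
    ("palm", "index_1"), ("index_1", "index_2"), ("index_2", "index_tip"),
    ("palm", "middle_1"), ("middle_1", "middle_2"), ("middle_2", "middle_tip"),
    ("palm", "ring_1"), ("ring_1", "ring_2"), ("ring_2", "ring_tip"),
]
pose = nx.Graph(edges)

# Centrality quantifies how tightly a keypoint is connected to its
# neighborhood: hubs like the palm score high, finger tips score low.
centrality = nx.harmonic_centrality(pose)

# Turn centrality into training weights; here peripheral keypoints get
# larger weights (inverse centrality), normalized so the weights sum to
# the number of keypoints and the overall loss scale stays unchanged.
inverse = {k: 1.0 / c for k, c in centrality.items()}
scale = len(inverse) / sum(inverse.values())
weights = {k: v * scale for k, v in inverse.items()}

for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} weight = {w:.2f}")
```

In training, such weights would multiply the per-keypoint localization losses; the printout only makes the relative weighting visible.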
Related papers
- Evaluating Multiview Object Consistency in Humans and Image Models [68.36073530804296]
We leverage an experimental design from the cognitive sciences which requires zero-shot visual inferences about object shape.
We collect 35K trials of behavioral data from over 500 participants.
We then evaluate the performance of common vision models.
arXiv Detail & Related papers (2024-09-09T17:59:13Z) - 2D Human Pose Estimation with Explicit Anatomical Keypoints Structure
Constraints [15.124606575017621]
We present a novel 2D human pose estimation method with explicit anatomical keypoints structure constraints.
Our proposed model can be plugged into most existing bottom-up or top-down human pose estimation methods.
Our method performs favorably against most existing bottom-up and top-down human pose estimation methods.
arXiv Detail & Related papers (2022-12-05T11:01:43Z) - Interacting Hand-Object Pose Estimation via Dense Mutual Attention [97.26400229871888]
3D hand-object pose estimation is the key to the success of many computer vision applications.
We propose a novel dense mutual attention mechanism that is able to model fine-grained dependencies between the hand and the object.
Our method is able to produce physically plausible poses with high quality and real-time inference speed.
arXiv Detail & Related papers (2022-11-16T10:01:33Z) - Single Person Pose Estimation: A Survey [45.144269986277365]
Human pose estimation in unconstrained images and videos is a fundamental computer vision task.
We summarize representative human pose methods in a structured taxonomy, with a particular focus on deep learning models and single-person image setting.
We examine and survey all the components of a typical human pose estimation pipeline, including data augmentation, model architecture and backbone.
arXiv Detail & Related papers (2021-09-21T09:53:15Z) - Greedy Offset-Guided Keypoint Grouping for Human Pose Estimation [31.468003041368814]
We employ an Hourglass Network to infer all the keypoints from different persons indiscriminately.
We greedily group the candidate keypoints into multiple human poses, utilizing the predicted guiding offsets (a rough sketch of this grouping step appears after this list).
Our approach is comparable to the state of the art on the challenging COCO dataset under fair conditions.
arXiv Detail & Related papers (2021-07-07T09:32:01Z) - Learning to Disambiguate Strongly Interacting Hands via Probabilistic
Per-pixel Part Segmentation [84.28064034301445]
Self-similarity, and the resulting ambiguities in assigning pixel observations to the respective hands, is a major cause of the final 3D pose error.
We propose DIGIT, a novel method for estimating the 3D poses of two interacting hands from a single monocular image.
We experimentally show that the proposed approach achieves new state-of-the-art performance on the InterHand2.6M dataset.
arXiv Detail & Related papers (2021-07-01T13:28:02Z) - Point-Set Anchors for Object Detection, Instance Segmentation and Pose
Estimation [85.96410825961966]
We argue that the image features extracted at a central point contain limited information for predicting distant keypoints or bounding box boundaries.
To facilitate inference, we propose to instead perform regression from a set of points placed at more advantageous positions.
We apply this proposed framework, called Point-Set Anchors, to object detection, instance segmentation, and human pose estimation.
arXiv Detail & Related papers (2020-07-06T15:59:56Z) - Self-supervised Keypoint Correspondences for Multi-Person Pose
Estimation and Tracking in Videos [32.43899916477434]
We propose an approach that relies on keypoint correspondences for associating persons in videos.
Instead of training the network for estimating keypoint correspondences on video data, it is trained on large-scale image datasets for human pose estimation.
Our approach achieves state-of-the-art results for multi-frame pose estimation and multi-person pose tracking on the PoseTrack 2017 and PoseTrack 2018 datasets.
arXiv Detail & Related papers (2020-04-27T09:02:24Z) - Measuring Generalisation to Unseen Viewpoints, Articulations, Shapes and
Objects for 3D Hand Pose Estimation under Hand-Object Interaction [137.28465645405655]
HANDS'19 is a challenge to evaluate the abilities of current 3D hand pose estimators (HPEs) to interpolate and extrapolate the poses of a training set.
We show that the accuracy of state-of-the-art methods can drop, and that they fail mostly on poses absent from the training set.
arXiv Detail & Related papers (2020-03-30T19:28:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.