PARE: Part Attention Regressor for 3D Human Body Estimation
- URL: http://arxiv.org/abs/2104.08527v1
- Date: Sat, 17 Apr 2021 12:42:56 GMT
- Title: PARE: Part Attention Regressor for 3D Human Body Estimation
- Authors: Muhammed Kocabas, Chun-Hao P. Huang, Otmar Hilliges, Michael J. Black
- Abstract summary: Part Attention REgressor learns to predict body-part-guided attention masks.
Code will be available for research purposes at https://pare.is.tue.mpg.de/.
- Score: 80.20146689494992
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite significant progress, we show that state-of-the-art 3D human
pose and shape estimation methods remain sensitive to partial occlusion and can
produce dramatically wrong predictions even though much of the body is observable. To
address this, we introduce a soft attention mechanism, called the Part
Attention REgressor (PARE), that learns to predict body-part-guided attention
masks. We observe that state-of-the-art methods rely on global feature
representations, making them sensitive to even small occlusions. In contrast,
PARE's part-guided attention mechanism overcomes these issues by exploiting
information about the visibility of individual body parts while leveraging
information from neighboring body-parts to predict occluded parts. We show
qualitatively that PARE learns sensible attention masks, and quantitative
evaluation confirms that PARE achieves more accurate and robust reconstruction
results than existing approaches on both occlusion-specific and standard
benchmarks. Code will be available for research purposes at
https://pare.is.tue.mpg.de/.
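The core idea of the abstract, per-part soft attention over a spatial feature map, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a generic convolutional feature map of shape (C, H, W) and one learned attention map per body part, and shows only the attention-weighted pooling step that yields one feature vector per part:

```python
import numpy as np

def part_attention_pool(features, attention_logits):
    """Aggregate one feature vector per body part by spatially
    soft-attending over the feature map, in the spirit of PARE's
    part-guided attention (illustrative sketch, not the paper's code).

    features:         (C, H, W) image feature map
    attention_logits: (P, H, W) one attention logit map per body part
    returns:          (P, C) one pooled feature vector per part
    """
    C, H, W = features.shape
    P = attention_logits.shape[0]
    # Softmax over all spatial locations, independently for each part.
    logits = attention_logits.reshape(P, H * W)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)        # (P, H*W), rows sum to 1
    # Weighted sum of features at every spatial location.
    feats = features.reshape(C, H * W)                   # (C, H*W)
    return weights @ feats.T                             # (P, C)
```

Because each part attends to its own spatial support, an occluder that corrupts one region mainly perturbs the parts whose attention mass falls there, rather than a single global feature vector, which is the robustness argument the abstract makes.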
Related papers
- PAFormer: Part Aware Transformer for Person Re-identification [3.8004980982852214]
We introduce the Part Aware Transformer (PAFormer), a pose-estimation-based ReID model that can perform precise part-to-part comparison.
Our method outperforms existing approaches on well-known ReID benchmark datasets.
arXiv Detail & Related papers (2024-08-12T04:46:55Z) - PAFUSE: Part-based Diffusion for 3D Whole-Body Pose Estimation [20.38424513438315]
We introduce a novel approach for 3D whole-body pose estimation, addressing the challenge of scale and deformability variance across body parts.
In addition to addressing the challenge of exploiting motion in unevenly sampled data, we combine stable diffusion with a hierarchical part representation.
On the H3WB dataset, our method greatly outperforms the current state of the art, which fails to exploit the temporal information.
arXiv Detail & Related papers (2024-07-14T14:24:05Z) - 3D WholeBody Pose Estimation based on Semantic Graph Attention Network and Distance Information [2.457872341625575]
A novel Semantic Graph Attention Network can benefit from the ability of self-attention to capture global context.
A Body Part Decoder assists in extracting and refining the information related to specific segments of the body.
A Geometry Loss makes a critical constraint on the structural skeleton of the body, ensuring that the model's predictions adhere to the natural limits of human posture.
arXiv Detail & Related papers (2024-06-03T10:59:00Z) - AiOS: All-in-One-Stage Expressive Human Pose and Shape Estimation [55.179287851188036]
We introduce a novel all-in-one-stage framework, AiOS, for expressive human pose and shape recovery without an additional human detection step.
We first employ a human token to probe a human location in the image and encode global features for each instance.
Then, we introduce a joint-related token to probe the human joints in the image and encode a fine-grained local feature.
arXiv Detail & Related papers (2024-03-26T17:59:23Z) - Understanding Pose and Appearance Disentanglement in 3D Human Pose Estimation [72.50214227616728]
Several methods have been proposed to learn image representations in a self-supervised fashion so as to disentangle the appearance information from the pose information.
We study disentanglement from the perspective of the self-supervised network, via diverse image synthesis experiments.
We design an adversarial strategy focused on generating natural appearance changes of the subject, against which we expect a disentangled network to be robust.
arXiv Detail & Related papers (2023-09-20T22:22:21Z) - BoPR: Body-aware Part Regressor for Human Shape and Pose Estimation [16.38936587088618]
Our proposed method BoPR, the Body-aware Part Regressor, first extracts features of both the body and part regions using an attention-guided mechanism.
We then utilize these features to encode extra part-body dependency for per-part regression, with part features as queries and body feature as a reference.
arXiv Detail & Related papers (2023-03-21T08:36:59Z) - Unsupervised 3D Keypoint Discovery with Multi-View Geometry [104.76006413355485]
We propose an algorithm that learns to discover 3D keypoints on human bodies from multiple-view images without supervision or labels.
Our approach discovers more interpretable and accurate 3D keypoints compared to other state-of-the-art unsupervised approaches.
arXiv Detail & Related papers (2022-11-23T10:25:12Z) - Quality-aware Part Models for Occluded Person Re-identification [77.24920810798505]
Occlusion poses a major challenge for person re-identification (ReID).
Existing approaches typically rely on outside tools to infer visible body parts, which may be suboptimal in terms of both computational efficiency and ReID accuracy.
We propose a novel method named Quality-aware Part Models (QPM) for occlusion-robust ReID.
arXiv Detail & Related papers (2022-01-01T03:51:09Z) - Unsupervised Pose-Aware Part Decomposition for 3D Articulated Objects [68.73163598790255]
We propose PPD (unsupervised Pose-aware Part Decomposition) to address a novel setting that explicitly targets man-made articulated objects with mechanical joints.
We show that category-common prior learning for both part shapes and poses facilitates the unsupervised learning of (1) part decomposition with non-primitive-based implicit representation, and (2) part pose as joint parameters under single-frame shape supervision.
arXiv Detail & Related papers (2021-10-08T23:53:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.