BoPR: Body-aware Part Regressor for Human Shape and Pose Estimation
- URL: http://arxiv.org/abs/2303.11675v2
- Date: Fri, 24 Mar 2023 08:41:24 GMT
- Title: BoPR: Body-aware Part Regressor for Human Shape and Pose Estimation
- Authors: Yongkang Cheng, Shaoli Huang, Jifeng Ning, Ying Shan
- Abstract summary: Our proposed method BoPR, the Body-aware Part Regressor, first extracts features of both the body and part regions using an attention-guided mechanism.
We then utilize these features to encode extra part-body dependency for per-part regression, with part features as queries and body feature as a reference.
- Score: 16.38936587088618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel approach for estimating human body shape and pose
from monocular images that effectively addresses the challenges of occlusions
and depth ambiguity. Our proposed method BoPR, the Body-aware Part Regressor,
first extracts features of both the body and part regions using an
attention-guided mechanism. We then utilize these features to encode extra
part-body dependency for per-part regression, with part features as queries and
body feature as a reference. This allows our network to infer the spatial
relationship of occluded parts with the body by leveraging visible parts and
body reference information. Our method outperforms existing state-of-the-art
methods on two benchmark datasets, and our experiments show that it
significantly surpasses existing methods in terms of depth ambiguity and
occlusion handling. These results provide strong evidence of the effectiveness
of our approach. The code and data are available for research purposes at
https://github.com/cyk990422/BoPR.
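The abstract's core mechanism is a cross-attention step in which each part feature acts as a query against body reference features, and the resulting body context is fused with the part feature for per-part regression. The sketch below illustrates that pattern with plain NumPy; the dimensions, token counts, and the 6-D per-part output are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def part_body_cross_attention(part_feats, body_tokens, Wq, Wk, Wv):
    """Scaled dot-product cross-attention: part features are the
    queries, body reference tokens are the keys/values."""
    Q = part_feats @ Wq                       # (P, d) one query per part
    K = body_tokens @ Wk                      # (R, d)
    V = body_tokens @ Wv                      # (R, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (P, R) part-to-body affinity
    attn = softmax(scores, axis=-1)           # each part attends over the body
    return attn @ V                           # (P, d) body context per part

# Illustrative sizes: 24 body parts, 8 body reference tokens, 16-d features.
P, R, d = 24, 8, 16
part_feats = rng.standard_normal((P, d))
body_tokens = rng.standard_normal((R, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

context = part_body_cross_attention(part_feats, body_tokens, Wq, Wk, Wv)

# Per-part regression head on the concatenated [part feature ; body context],
# so an occluded part can still be regressed from visible-body evidence.
W_head = rng.standard_normal((2 * d, 6)) * 0.1  # hypothetical 6-D output per part
params = np.concatenate([part_feats, context], axis=-1) @ W_head
print(params.shape)  # (24, 6)
```

The point of the query/reference split is that the regression for a part never depends solely on that part's (possibly occluded) pixels: the attention row for each part redistributes information from the whole body encoding.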
Related papers
- PAFormer: Part Aware Transformer for Person Re-identification [3.8004980982852214]
We introduce the Part Aware Transformer (PAFormer), a pose-estimation-based ReID model which can perform precise part-to-part comparison.
Our method outperforms existing approaches on well-known ReID benchmark datasets.
arXiv Detail & Related papers (2024-08-12T04:46:55Z) - Divide and Fuse: Body Part Mesh Recovery from Partially Visible Human Images [57.479339658504685]
The "Divide and Fuse" strategy reconstructs human body parts independently before fusing them.
Human Part Parametric Models (HPPM) independently reconstruct the mesh from a few shape and global-location parameters.
A specially designed fusion module seamlessly integrates the reconstructed parts, even when only a few are visible.
arXiv Detail & Related papers (2024-07-12T21:29:11Z) - 3D WholeBody Pose Estimation based on Semantic Graph Attention Network and Distance Information [2.457872341625575]
A novel Semantic Graph Attention Network can benefit from the ability of self-attention to capture global context.
A Body Part Decoder assists in extracting and refining the information related to specific segments of the body.
A Geometry Loss imposes a critical constraint on the structural skeleton of the body, ensuring that the model's predictions adhere to the natural limits of human posture.
arXiv Detail & Related papers (2024-06-03T10:59:00Z) - AiOS: All-in-One-Stage Expressive Human Pose and Shape Estimation [55.179287851188036]
We introduce a novel all-in-one-stage framework, AiOS, for expressive human pose and shape recovery without an additional human detection step.
We first employ a human token to probe a human location in the image and encode global features for each instance.
Then, we introduce a joint-related token to probe the human joints in the image and encode a fine-grained local feature.
arXiv Detail & Related papers (2024-03-26T17:59:23Z) - Reconstructing 3D Human Pose from RGB-D Data with Occlusions [11.677978425905096]
We propose a new method to reconstruct the 3D human body from RGB-D images with occlusions.
To reconstruct a semantically and physically plausible human body, we propose to reduce the solution space based on scene information and prior knowledge.
We conducted experiments on the PROX dataset, and the results demonstrate that our method produces more accurate and plausible results compared with other methods.
arXiv Detail & Related papers (2023-10-02T14:16:13Z) - Body Part-Based Representation Learning for Occluded Person Re-Identification [102.27216744301356]
Occluded person re-identification (ReID) is a person retrieval task which aims at matching occluded person images with holistic ones.
Part-based methods have been shown beneficial as they offer fine-grained information and are well suited to represent partially visible human bodies.
We propose BPBreID, a body part-based ReID model for solving the above issues.
arXiv Detail & Related papers (2022-11-07T16:48:41Z) - KTN: Knowledge Transfer Network for Learning Multi-person 2D-3D Correspondences [77.56222946832237]
We present a novel framework to detect the densepose of multiple people in an image.
The proposed method, which we refer to as the Knowledge Transfer Network (KTN), tackles two main problems.
It simultaneously maintains feature resolution and suppresses background pixels, a strategy that results in a substantial increase in accuracy.
arXiv Detail & Related papers (2022-06-21T03:11:37Z) - PARE: Part Attention Regressor for 3D Human Body Estimation [80.20146689494992]
Part Attention REgressor learns to predict body-part-guided attention masks.
Code will be available for research purposes at https://pare.is.tue.mpg.de/.
arXiv Detail & Related papers (2021-04-17T12:42:56Z) - Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering [53.16864661460889]
Recent regression-based methods succeed in estimating parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.