AggPose: Deep Aggregation Vision Transformer for Infant Pose Estimation
- URL: http://arxiv.org/abs/2205.05277v1
- Date: Wed, 11 May 2022 05:34:14 GMT
- Title: AggPose: Deep Aggregation Vision Transformer for Infant Pose Estimation
- Authors: Xu Cao, Xiaoye Li, Liya Ma, Yi Huang, Xuan Feng, Zening Chen, Hongwu
Zeng, Jianguo Cao
- Abstract summary: We propose an infant pose dataset and a Deep Aggregation Vision Transformer for human pose estimation.
AggPose is a fast-to-train, fully transformer-based framework that uses no convolution operations for feature extraction in the early stages.
We show that AggPose can effectively learn multi-scale features across different resolutions and significantly improve infant pose estimation performance.
- Score: 6.9000851935487075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Movement and pose assessment of newborns lets experienced pediatricians
predict neurodevelopmental disorders, allowing early intervention for related
diseases. However, most recent AI approaches to human pose estimation focus on
adults, and no public benchmark exists for infant pose estimation. In this
paper, we fill this gap by proposing an infant pose dataset and AggPose, a Deep
Aggregation Vision Transformer for pose estimation: a fast-to-train, fully
transformer-based framework that uses no convolution operations for feature
extraction in the early stages. It generalizes Transformer + MLP to
high-resolution deep layer aggregation within feature maps, enabling
information fusion between different vision levels. We pre-train AggPose on the
COCO pose dataset and apply it to our newly released large-scale infant pose
estimation dataset. The results show that AggPose effectively learns
multi-scale features across different resolutions and significantly improves
infant pose estimation performance. AggPose outperforms the hybrid models
HRFormer and TokenPose on the infant pose estimation dataset, and it
outperforms HRFormer by 0.7 AP on average on COCO val pose estimation. Our
code is available at github.com/SZAR-LAB/AggPose.
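The abstract names the mechanism (deep layer aggregation of multi-resolution feature maps) but not its exact form. The sketch below illustrates the general pattern of cross-resolution fusion in a pose network: each branch receives resized, projected copies of the others, then refines the sum. All names, shapes, and the use of 1x1 convolutions (equivalent to per-token linear layers) are illustrative assumptions; see github.com/SZAR-LAB/AggPose for the actual architecture.

```python
# Hedged sketch of cross-resolution feature fusion; not the AggPose code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Fuse feature maps from parallel resolution branches.

    Branch i holds features of shape (B, C_i, H_i, W_i). Each branch
    receives resized, linearly projected copies of every other branch,
    then refines the sum with a position-wise MLP.
    """
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        n = len(channels)
        # 1x1 convs act as per-token linear projections between branches.
        self.proj = nn.ModuleList(
            nn.ModuleList(nn.Conv2d(channels[j], channels[i], 1) for j in range(n))
            for i in range(n)
        )
        self.mlp = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c, 4 * c, 1), nn.GELU(), nn.Conv2d(4 * c, c, 1))
            for c in channels
        )

    def forward(self, feats):
        fused = []
        for i, x in enumerate(feats):
            agg = x
            for j, y in enumerate(feats):
                if i == j:
                    continue
                # Resize branch j to branch i's resolution, then project.
                y = F.interpolate(y, size=x.shape[-2:], mode="bilinear",
                                  align_corners=False)
                agg = agg + self.proj[i][j](y)
            fused.append(agg + self.mlp[i](agg))  # residual MLP refinement
        return fused

# Toy example: three branches at descending resolutions.
feats = [torch.randn(1, 64, 64, 48), torch.randn(1, 128, 32, 24),
         torch.randn(1, 256, 16, 12)]
print([f.shape for f in MultiScaleFusion()(feats)])
```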
Related papers
- Comparison of marker-less 2D image-based methods for infant pose estimation [2.7726930707973048]
The General Movement Assessment (GMA) is a video-based tool to classify infant motor functioning.
We compare the performance of available generic- and infant-pose estimators, and the choice of viewing angle for optimal recordings.
The results show that the best performing generic model trained on adults, ViTPose, also performs best on infants.
arXiv Detail & Related papers (2024-10-07T12:21:49Z)
- Under the Cover Infant Pose Estimation using Multimodal Data [0.0]
We present the Simultaneously-collected multimodal Mannequin Lying pose (SMaL) dataset, a novel dataset for under-the-cover infant pose estimation.
We successfully infer full body pose under the cover by training state-of-the-art pose estimation methods.
Our best performing model detected joints under the cover to within 25 mm 86% of the time, with an overall mean error of 16.9 mm (see the metric sketch below).
arXiv Detail & Related papers (2022-10-03T00:34:45Z)
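The 25 mm / 86% figure above is a PCK-style (percentage of correct keypoints) metric. A minimal sketch of how such numbers are computed, on synthetic data rather than the SMaL evaluation code:

```python
# PCK-style metric as reported above: fraction of predicted joints
# within 25 mm of ground truth, plus mean joint error. Synthetic
# arrays for illustration; not the paper's evaluation code.
import numpy as np

def joint_metrics(pred, gt, threshold_mm=25.0):
    """pred, gt: (frames, joints, 3) positions in millimetres."""
    errors = np.linalg.norm(pred - gt, axis=-1)   # per-joint distances
    return (errors <= threshold_mm).mean(), errors.mean()

rng = np.random.default_rng(0)
gt = rng.uniform(0, 500, size=(100, 14, 3))       # hypothetical joints
pred = gt + rng.normal(0, 12, size=gt.shape)      # ~12 mm prediction noise
pck, mean_err = joint_metrics(pred, gt)
print(f"PCK@25mm: {pck:.1%}, mean error: {mean_err:.1f} mm")
```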
- Bottom-Up 2D Pose Estimation via Dual Anatomical Centers for Small-Scale Persons [75.86463396561744]
In multi-person 2D pose estimation, bottom-up methods predict poses for all persons simultaneously.
Our method achieves a 38.4% improvement in bounding box precision and a 39.1% improvement in bounding box recall over the state of the art (SOTA).
For the human pose AP evaluation, we achieve a new SOTA (71.0 AP) on the COCO test-dev set with the single-scale testing.
arXiv Detail & Related papers (2022-08-25T10:09:10Z)
- ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation [76.35955924137986]
We show that a plain vision transformer with MAE pretraining can obtain superior performance after finetuning on human pose estimation datasets.
Our biggest ViTPose model, based on the ViTAE-G backbone with 1 billion parameters, obtains the best result of 80.9 mAP on the MS COCO test-dev set.
arXiv Detail & Related papers (2022-04-26T17:55:04Z)
- Unsupervised Human Pose Estimation through Transforming Shape Templates [2.729524133721473]
We present a novel method for learning pose estimators for human adults and infants in an unsupervised fashion.
We demonstrate the effectiveness of our approach on two different datasets including adults and infants.
arXiv Detail & Related papers (2021-05-10T07:15:56Z)
- End-to-End Trainable Multi-Instance Pose Estimation with Transformers [68.93512627479197]
We propose a new end-to-end trainable approach for multi-instance pose estimation by combining a convolutional neural network with a transformer.
Inspired by recent work on end-to-end trainable object detection with transformers, we use a transformer encoder-decoder architecture together with a bipartite matching scheme to directly regress the pose of all individuals in a given image.
Our model, called POse Estimation Transformer (POET), is trained using a novel set-based global loss that consists of a keypoint loss, a keypoint visibility loss, a center loss, and a class loss (the matching-plus-loss pattern is sketched below).
arXiv Detail & Related papers (2021-03-22T18:19:22Z)
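POET's set-based training first pairs predictions with ground-truth poses by bipartite matching, then sums per-pair loss terms. The sketch below shows only that pattern, with a simple keypoint-distance cost and an L1 keypoint loss; the cost, the weight, and the omission of the visibility, center, and class terms are simplifying assumptions, not POET's actual formulation.

```python
# Sketch of DETR-style set-based pose training: Hungarian matching
# between predicted and ground-truth poses, then a loss over matched
# pairs. Only the keypoint term is shown here.
import numpy as np
from scipy.optimize import linear_sum_assignment

def set_loss(pred_kpts, gt_kpts, w_kpt=1.0):
    """pred_kpts: (P, J, 2); gt_kpts: (G, J, 2), with P >= G."""
    # Matching cost: mean keypoint distance for every (pred, gt) pair.
    cost = np.linalg.norm(pred_kpts[:, None] - gt_kpts[None], axis=-1).mean(-1)
    rows, cols = linear_sum_assignment(cost)      # optimal one-to-one match
    return w_kpt * np.abs(pred_kpts[rows] - gt_kpts[cols]).mean()  # L1 loss

preds = np.random.rand(5, 17, 2)                  # 5 predicted poses
gts = np.random.rand(2, 17, 2)                    # 2 ground-truth poses
print(set_loss(preds, gts))
```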
- AdaFuse: Adaptive Multiview Fusion for Accurate Human Pose Estimation in the Wild [77.43884383743872]
We present AdaFuse, an adaptive multiview fusion method to enhance the features in occluded views.
We extensively evaluate the approach on three public datasets including Human3.6M, Total Capture and CMU Panoptic.
We also create a large-scale synthetic dataset, Occlusion-Person, which allows us to perform numerical evaluation on the occluded joints.
arXiv Detail & Related papers (2020-10-26T03:19:46Z)
- Invariant Representation Learning for Infant Pose Estimation with Small Data [14.91506452479778]
We release a hybrid synthetic and real infant pose dataset, SyRIP, with a small yet diverse set of real images as well as generated synthetic infant poses.
In our ablation study, with identical network structure, models trained on the SyRIP dataset show noticeable improvement over those trained on the only other public infant pose dataset.
One of our best infant pose estimators, built on the state-of-the-art DarkPose model, achieves a mean average precision (mAP) of 93.6.
arXiv Detail & Related papers (2020-10-13T01:10:14Z)
- Bottom-Up Human Pose Estimation by Ranking Heatmap-Guided Adaptive Keypoint Estimates [76.51095823248104]
We present several schemes, rarely or not thoroughly studied before, for improving keypoint detection and grouping (keypoint regression) performance.
First, we exploit the keypoint heatmaps for pixel-wise keypoint regression instead of treating detection and regression separately.
Second, we adopt a pixel-wise spatial transformer network to learn adaptive representations for handling scale and orientation variance.
Third, we present a joint shape and heatvalue scoring scheme to promote estimated poses that are more likely to be true poses (the heatvalue half of this building block is sketched below).
arXiv Detail & Related papers (2020-06-28T01:14:59Z)
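The common building block behind these schemes is decoding keypoint coordinates from per-joint heatmaps and scoring a pose by its heat values. A minimal sketch of that step alone (the shape term of the paper's scoring scheme is omitted; all names are illustrative):

```python
# Minimal sketch of heatmap decoding plus heat-value pose scoring.
# Covers only the generic building block, not the paper's schemes.
import numpy as np

def decode_and_score(heatmaps):
    """heatmaps: (J, H, W), one map per keypoint.
    Returns (J, 2) integer (x, y) coords and a pose score."""
    J, H, W = heatmaps.shape
    flat = heatmaps.reshape(J, -1)
    idx = flat.argmax(axis=1)                        # peak per joint
    coords = np.stack([idx % W, idx // W], axis=1)   # (x, y) locations
    return coords, flat.max(axis=1).mean()           # mean peak heat value

hm = np.zeros((17, 64, 48))
hm[0, 10, 20] = 0.9                                  # synthetic peak, joint 0
coords, score = decode_and_score(hm)
print(coords[0], round(score, 3))                    # -> [20 10] 0.053
```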
- Preterm infants' pose estimation with spatio-temporal features [7.054093620465401]
This paper introduces the use of spatio-temporal features for limb detection and tracking.
It is the first study to use depth videos acquired in actual clinical practice for limb-pose estimation.
arXiv Detail & Related papers (2020-05-08T09:51:22Z)
- Anatomy-aware 3D Human Pose Estimation with Bone-based Pose Decomposition [92.99291528676021]
Instead of directly regressing the 3D joint locations, we decompose the task into bone direction prediction and bone length prediction.
Our motivation is the fact that the bone lengths of a human skeleton remain consistent across time.
Our full model outperforms the previous best results on the Human3.6M and MPI-INF-3DHP datasets (the bone decomposition is sketched below).
arXiv Detail & Related papers (2020-02-24T15:49:37Z)
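The decomposition above is concrete enough to sketch: joints are reconstructed by walking the skeleton from the root and adding each predicted unit bone direction scaled by its predicted length, which is what lets fixed bone lengths constrain the pose over time. The 5-joint chain below is a toy skeleton, not the paper's model.

```python
# Sketch of bone-based pose composition: recover 3D joints from
# predicted bone directions and lengths by accumulating along the
# skeleton from the root. Toy 5-joint chain for illustration.
import numpy as np

PARENT = [-1, 0, 1, 2, 3]   # parent index per joint; joint 0 is the root

def compose_joints(root, directions, lengths):
    """directions: (J-1, 3) unit vectors; lengths: (J-1,) metres."""
    joints = [np.asarray(root, dtype=float)]
    for i, p in enumerate(PARENT[1:]):
        joints.append(joints[p] + lengths[i] * directions[i])
    return np.stack(joints)

dirs = np.tile([0.0, 0.0, 1.0], (4, 1))        # all bones point "up"
lengths = np.array([0.3, 0.25, 0.25, 0.1])     # lengths stay fixed over time
print(compose_joints([0, 0, 0], dirs, lengths)[-1])   # -> [0. 0. 0.9]
```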