Towards Accurate Cross-Domain In-Bed Human Pose Estimation
- URL: http://arxiv.org/abs/2110.03578v1
- Date: Thu, 7 Oct 2021 15:54:46 GMT
- Title: Towards Accurate Cross-Domain In-Bed Human Pose Estimation
- Authors: Mohamed Afham, Udith Haputhanthri, Jathurshan Pradeepkumar, Mithunjha
Anandakumar, Ashwin De Silva, Chamira Edussooriya
- Abstract summary: Long-wavelength infrared (LWIR) modality-based pose estimation algorithms overcome the aforementioned challenges.
We propose a novel learning strategy comprising two-fold data augmentation to reduce the cross-domain discrepancy.
Our experiments and analysis show the effectiveness of our approach over multiple standard human pose estimation baselines.
- Score: 3.685548851716087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human behavioral monitoring during sleep is essential for various medical
applications. The majority of contactless human pose estimation algorithms are
based on the RGB modality, which makes them ineffective for in-bed pose estimation
due to occlusions by blankets and varying illumination conditions. Long-wavelength
infrared (LWIR) modality-based pose estimation algorithms overcome the
aforementioned challenges; however, ground-truth pose generation by a human
annotator under such conditions is not feasible. A feasible solution to
address this issue is to transfer the knowledge learned from images with pose
labels and no occlusions, and adapt it to real-world conditions
(occlusions due to blankets). In this paper, we propose a novel learning
strategy comprising two-fold data augmentation to reduce the cross-domain
discrepancy and knowledge distillation to learn the distribution of unlabeled
images in real-world conditions. Our experiments and analysis show the
effectiveness of our approach over multiple standard human pose estimation
baselines.
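The abstract does not specify the exact augmentation operators or the form of the distillation objective, so the following is only a minimal sketch of the described recipe (supervised training on labeled, occlusion-free source images plus teacher-guided distillation on augmented, unlabeled target-domain images); every module name, augmentation hook, and loss weight is an illustrative assumption rather than the authors' implementation.

```python
# Minimal sketch (not the authors' released code): a generic teacher-student
# distillation step with two augmentation hooks, assuming heatmap-based pose
# networks. All names and weights below are illustrative placeholders.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, labeled_batch, unlabeled_images,
                      augment_source, augment_target, optimizer, alpha=0.5):
    """One training step: supervised loss on augmented labeled (source) images
    plus a distillation loss on augmented unlabeled (target-domain) images."""
    images, gt_heatmaps = labeled_batch              # source: pose labels, no occlusions
    sup_loss = F.mse_loss(student(augment_source(images)), gt_heatmaps)

    with torch.no_grad():                            # teacher provides soft targets
        teacher_heatmaps = teacher(unlabeled_images)
    distill_loss = F.mse_loss(student(augment_target(unlabeled_images)),
                              teacher_heatmaps)

    loss = sup_loss + alpha * distill_loss           # weighting is an assumption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `augment_source` and `augment_target` stand in for the paper's two-fold augmentation and `teacher` for a network trained on the labeled source domain; the actual objectives and schedules are defined in the paper itself.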
Related papers
- Occluded Human Pose Estimation based on Limb Joint Augmentation [14.36131862057872]
We propose an occluded human pose estimation framework based on limb joint augmentation to enhance the generalization ability of the pose estimation model on occluded human bodies.
To further enhance the localization ability of the model, this paper constructs a dynamic structure loss function based on limb graphs to explore the distribution of occluded joints.
arXiv Detail & Related papers (2024-10-13T15:48:24Z)
- In-Bed Pose Estimation: A Review [8.707107668375906]
In-bed pose estimation can be used to monitor a person's sleep behavior and detect symptoms early for potential disease diagnosis.
Several studies have utilized unimodal and multimodal methods to estimate in-bed human poses.
Our objectives are to show the limitations of previous studies and the current challenges, and to provide insights for future work in the in-bed human pose estimation field.
arXiv Detail & Related papers (2024-02-01T15:57:11Z)
- Unsupervised Domain Adaptation for Low-dose CT Reconstruction via Bayesian Uncertainty Alignment [32.632944734192435]
Low-dose computed tomography (LDCT) image reconstruction techniques can reduce patient radiation exposure while maintaining acceptable imaging quality.
Deep learning is widely used for this problem, but performance on test data often degrades in clinical scenarios.
Unsupervised domain adaptation (UDA) of LDCT reconstruction has been proposed to solve this problem through distribution alignment.
arXiv Detail & Related papers (2023-02-26T07:10:09Z)
- Anatomy-guided domain adaptation for 3D in-bed human pose estimation [62.3463429269385]
3D human pose estimation is a key component of clinical monitoring systems.
We present a novel domain adaptation method, adapting a model from a labeled source to a shifted unlabeled target domain.
Our method consistently outperforms various state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-11-22T11:34:51Z)
- Aligning Silhouette Topology for Self-Adaptive 3D Human Pose Recovery [70.66865453410958]
Articulation-centric 2D/3D pose supervision forms the core training objective in most existing 3D human pose estimation techniques.
We propose a novel framework that relies only on silhouette supervision to adapt a source-trained model-based regressor.
We develop a series of convolution-friendly spatial transformations in order to disentangle a topological-skeleton representation from the raw silhouette.
arXiv Detail & Related papers (2022-04-04T06:58:15Z)
- Direct Dense Pose Estimation [138.56533828316833]
Dense human pose estimation is the problem of learning dense correspondences between RGB images and the surfaces of human bodies.
Prior dense pose estimation methods are all based on the Mask R-CNN framework and operate in a top-down manner, first attempting to identify a bounding box for each person.
We propose a novel alternative method for solving the dense pose estimation problem, called Direct Dense Pose (DDP).
arXiv Detail & Related papers (2022-04-04T06:14:38Z)
- Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net that constitutes a common deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z)
- Privacy-Preserving In-Bed Pose Monitoring: A Fusion and Reconstruction Study [9.474452908573111]
We explore the effective use of images from multiple non-visual and privacy-preserving modalities for the task of in-bed pose estimation.
First, we explore the effective fusion of information from different imaging modalities for better pose estimation.
Second, we propose a framework that can perform in-bed pose estimation when visible images are unavailable.
arXiv Detail & Related papers (2022-02-22T07:24:21Z)
- Single Image Human Proxemics Estimation for Visual Social Distancing [37.84559773949066]
We propose a semi-automatic solution to approximate the homography matrix between the scene ground and image plane.
We then leverage an off-the-shelf pose detector to detect body poses in the image and to reason about their inter-personal distances.
arXiv Detail & Related papers (2020-11-03T21:49:13Z)
- Appearance Consensus Driven Self-Supervised Human Mesh Recovery [67.20942777949793]
We present a self-supervised human mesh recovery framework to infer human pose and shape from monocular images.
We achieve state-of-the-art results on the standard model-based 3D pose estimation benchmarks.
The resulting colored mesh prediction opens up the usage of our framework for a variety of appearance-related tasks beyond the pose and shape estimation.
arXiv Detail & Related papers (2020-08-04T05:40:39Z)
- Multi-person 3D Pose Estimation in Crowded Scenes Based on Multi-View Geometry [62.29762409558553]
Epipolar constraints are at the core of feature matching and depth estimation in multi-person 3D human pose estimation methods (a minimal form of this constraint is sketched after this list).
Despite the satisfactory performance of this formulation in sparser crowd scenes, its effectiveness is frequently challenged under denser crowd circumstances.
In this paper, we depart from the multi-person 3D pose estimation formulation, and instead reformulate it as crowd pose estimation.
arXiv Detail & Related papers (2020-07-21T17:59:36Z)
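As geometric background for the epipolar constraint mentioned in the last entry above (this is not code from any of the listed papers), corresponding points x in one view and x' in another satisfy x'^T F x = 0 for the fundamental matrix F; a minimal residual check, with all names illustrative, might look like this:

```python
# Background sketch only: for corresponding points x in view 1 and x' in
# view 2, the fundamental matrix F satisfies x'^T F x = 0; small residuals
# indicate geometrically consistent cross-view matches.
import numpy as np

def epipolar_residuals(F_mat, pts1, pts2):
    """Return |x'^T F x| for each correspondence.

    F_mat : (3, 3) fundamental matrix relating view 1 to view 2
    pts1  : (N, 2) pixel coordinates in view 1
    pts2  : (N, 2) corresponding pixel coordinates in view 2
    """
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])  # homogeneous coordinates
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    return np.abs(np.einsum('ni,ij,nj->n', x2, F_mat, x1))

# Cross-view joint matching can keep only candidate pairs whose residual is
# below a threshold, which is the feature-matching role noted in the entry.
```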
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.