Domain-Adaptive 2D Human Pose Estimation via Dual Teachers in Extremely Low-Light Conditions
- URL: http://arxiv.org/abs/2407.15451v2
- Date: Tue, 23 Jul 2024 07:22:53 GMT
- Title: Domain-Adaptive 2D Human Pose Estimation via Dual Teachers in Extremely Low-Light Conditions
- Authors: Yihao Ai, Yifei Qi, Bo Wang, Yu Cheng, Xinchao Wang, Robby T. Tan
- Abstract summary: Recent studies on low-light pose estimation require the use of paired well-lit and low-light images with ground truths for training.
Our primary novelty lies in leveraging two complementary-teacher networks to generate more reliable pseudo labels.
Our method achieves a 6.8% (2.4 AP) improvement over the state-of-the-art (SOTA) method.
- Score: 65.0109231252639
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing 2D human pose estimation research predominantly concentrates on well-lit scenarios, with limited exploration of poor lighting conditions, which are a prevalent aspect of daily life. Recent studies on low-light pose estimation require paired well-lit and low-light images with ground truths for training, which is impractical due to the inherent challenges of annotating low-light images. To this end, we introduce a novel approach that eliminates the need for low-light ground truths. Our primary novelty lies in leveraging two complementary-teacher networks to generate more reliable pseudo labels, enabling our model to achieve competitive performance on extremely low-light images without training on low-light ground truths. Our framework consists of two stages. In the first stage, our model is trained on well-lit data with low-light augmentations. In the second stage, we propose a dual-teacher framework to utilize the unlabeled low-light data, where a center-based main teacher produces pseudo labels for relatively visible cases, while a keypoints-based complementary teacher focuses on producing pseudo labels for persons missed by the main teacher. With the pseudo labels from both teachers, we propose a person-specific low-light augmentation that challenges the student model during training to outperform the teachers. Experimental results on the real low-light dataset (ExLPose-OCN) show that our method achieves a 6.8% (2.4 AP) improvement over the state-of-the-art (SOTA) method, even though, in contrast to the SOTA method, no low-light ground-truth data is used in our approach. Our code will be available at: https://github.com/ayh015-dev/DA-LLPose.
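The dual-teacher pseudo-labelling step described in the abstract can be pictured roughly as follows: keep the center-based main teacher's confident detections, then add complementary-teacher detections only for persons the main teacher missed. The sketch below is an illustrative PyTorch-style approximation, not the authors' released code (see the linked repository for that); the teacher output format, score threshold, and IoU-based matching are assumptions.

```python
import torch

def box_iou(a, b):
    """IoU between two sets of person boxes, shapes (N, 4) and (M, 4), xyxy format."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = torch.max(a[:, None, :2], b[None, :, :2])
    rb = torch.min(a[:, None, 2:], b[None, :, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, :, 0] * wh[:, :, 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-6)

@torch.no_grad()
def dual_teacher_pseudo_labels(main_teacher, comp_teacher, low_light_img,
                               score_thr=0.5, iou_thr=0.5):
    """Merge pseudo labels: keep the main teacher's confident detections, then add
    complementary-teacher detections that do not overlap any main-teacher box."""
    # Hypothetical teacher interface: each returns a dict with 'boxes', 'keypoints', 'scores'.
    main = main_teacher(low_light_img)
    comp = comp_teacher(low_light_img)

    keep_main = main['scores'] > score_thr
    boxes, kpts = main['boxes'][keep_main], main['keypoints'][keep_main]

    keep_comp = comp['scores'] > score_thr
    c_boxes, c_kpts = comp['boxes'][keep_comp], comp['keypoints'][keep_comp]
    if len(c_boxes) and len(boxes):
        # A complementary detection counts as a "missed person" if it matches no main-teacher box.
        missed = box_iou(c_boxes, boxes).max(dim=1).values < iou_thr
        c_boxes, c_kpts = c_boxes[missed], c_kpts[missed]

    return torch.cat([boxes, c_boxes]), torch.cat([kpts, c_kpts])
```

The merged pseudo labels would then supervise the student, which is trained under the person-specific low-light augmentation so that it sees harder inputs than either teacher.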
Related papers
- Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement [59.17372460692809]
This work proposes a mean-teacher-based semi-supervised low-light enhancement (Semi-LLIE) framework that integrates the unpaired data into model training.
We introduce a semantic-aware contrastive loss to faithfully transfer the illumination distribution, helping to enhance images with natural colors.
We also propose a novel perceptive loss based on the large-scale vision-language Recognize Anything Model (RAM) to help generate enhanced images with richer textural details.
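The mean-teacher part of Semi-LLIE follows the standard recipe in which the teacher is an exponential moving average (EMA) of the student. A minimal sketch under that assumption is below; the paper's semantic-aware contrastive loss and RAM-based perceptive loss are reduced to a single placeholder consistency term, and none of the names come from the Semi-LLIE code.

```python
import torch

def ema_update(teacher, student, decay=0.999):
    """Update teacher weights as an exponential moving average of the student."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

def semi_supervised_step(student, teacher, optimizer, unpaired_low_light, consistency_loss):
    """One unpaired-data step: the teacher's enhanced output serves as a soft target
    for the student (assumed interface; not the Semi-LLIE implementation)."""
    with torch.no_grad():
        target = teacher(unpaired_low_light)   # teacher prediction, no gradients
    pred = student(unpaired_low_light)
    loss = consistency_loss(pred, target)      # placeholder for the paper's loss terms
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)               # teacher trails the student via EMA
    return loss.item()
```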
arXiv Detail & Related papers (2024-09-25T04:05:32Z) - Effective Whole-body Pose Estimation with Two-stages Distillation [52.92064408970796]
Whole-body pose estimation localizes the human body, hand, face, and foot keypoints in an image.
We present a two-stage pose distillation framework for whole-body pose estimators, named DWPose, to improve their effectiveness and efficiency.
arXiv Detail & Related papers (2023-07-29T03:49:28Z) - Human Pose Estimation in Extremely Low-Light Conditions [21.210706205233286]
We develop a dedicated camera system and build a new dataset of real low-light images with accurate pose labels.
Thanks to our camera system, each low-light image in our dataset is coupled with an aligned well-lit image, which enables accurate pose labeling.
We also propose a new model and a new training strategy that fully exploit the privileged information to learn a representation insensitive to lighting conditions.
arXiv Detail & Related papers (2023-03-27T17:28:25Z) - 2PCNet: Two-Phase Consistency Training for Day-to-Night Unsupervised
Domain Adaptive Object Detection [30.114398123450236]
This paper proposes a two-phase consistency unsupervised domain adaptation network, 2PCNet, to address these issues.
Experiments on publicly available datasets demonstrate that our method outperforms state-of-the-art methods by 20%.
arXiv Detail & Related papers (2023-03-24T08:22:41Z) - Semi-Supervised 2D Human Pose Estimation Driven by Position
Inconsistency Pseudo Label Correction Module [74.80776648785897]
Previous methods ignored two problems: (i) when conducting interactive training between a large model and a lightweight model, the pseudo labels of the lightweight model are used to guide the large model.
We propose a semi-supervised 2D human pose estimation framework driven by a position inconsistency pseudo label correction module (SSPCM)
To further improve the performance of the student model, we use the semi-supervised Cut-Occlude based on pseudo keypoint perception to generate harder and more effective samples.
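Cut-Occlude can be pictured as cropping a small patch around a body keypoint of one person and pasting it onto a pseudo-keypoint location of another, so the occluded joint becomes a harder training sample. The snippet below is only an illustrative approximation of that idea, not the SSPCM implementation; the patch size and random keypoint selection are assumptions.

```python
import numpy as np

def cut_occlude(target_img, target_pseudo_kpts, source_img, source_kpt, patch=32, rng=None):
    """Crop a (patch x patch) region around `source_kpt` in `source_img` and paste it
    over a randomly chosen pseudo keypoint of `target_img` to simulate occlusion.
    Images are HxWx3 uint8 arrays; keypoints are (x, y) pixel coordinates."""
    rng = rng or np.random.default_rng()
    h, w = target_img.shape[:2]
    half = patch // 2

    # Crop the occluder patch around the source keypoint (clamped to image bounds).
    sx, sy = int(source_kpt[0]), int(source_kpt[1])
    sx0, sy0 = max(sx - half, 0), max(sy - half, 0)
    occluder = source_img[sy0:sy0 + patch, sx0:sx0 + patch]

    # Paste it centred on a randomly selected pseudo keypoint of the target person.
    tx, ty = target_pseudo_kpts[rng.integers(len(target_pseudo_kpts))]
    tx0, ty0 = int(np.clip(tx - half, 0, w - 1)), int(np.clip(ty - half, 0, h - 1))
    ph, pw = occluder.shape[:2]
    ph, pw = min(ph, h - ty0), min(pw, w - tx0)

    out = target_img.copy()
    out[ty0:ty0 + ph, tx0:tx0 + pw] = occluder[:ph, :pw]
    return out
```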
arXiv Detail & Related papers (2023-03-08T02:57:05Z) - Learning to Relight Portrait Images via a Virtual Light Stage and
Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z) - Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)