Prior-guided Source-free Domain Adaptation for Human Pose Estimation
- URL: http://arxiv.org/abs/2308.13954v1
- Date: Sat, 26 Aug 2023 20:30:04 GMT
- Title: Prior-guided Source-free Domain Adaptation for Human Pose Estimation
- Authors: Dripta S. Raychaudhuri, Calvin-Khang Ta, Arindam Dutta, Rohit Lal,
Amit K. Roy-Chowdhury
- Abstract summary: Domain adaptation methods for 2D human pose estimation typically require continuous access to the source data.
We present Prior-guided Self-training (POST), a pseudo-labeling approach that builds on the popular Mean Teacher framework.
- Score: 24.50953879583841
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain adaptation methods for 2D human pose estimation typically require
continuous access to the source data during adaptation, which can be
challenging due to privacy, memory, or computational constraints. To address
this limitation, we focus on the task of source-free domain adaptation for pose
estimation, where a source model must adapt to a new target domain using only
unlabeled target data. Although recent advances have introduced source-free
methods for classification tasks, extending them to the regression task of pose
estimation is non-trivial. In this paper, we present Prior-guided Self-training
(POST), a pseudo-labeling approach that builds on the popular Mean Teacher
framework to compensate for the distribution shift. POST leverages
prediction-level and feature-level consistency between a student and teacher
model against certain image transformations. In the absence of source data,
POST utilizes a human pose prior that regularizes the adaptation process by
directing the model to generate more accurate and anatomically plausible pose
pseudo-labels. Despite being simple and intuitive, our framework can deliver
significant performance gains compared to applying the source model directly to
the target data, as demonstrated in our extensive experiments and ablation
studies. In fact, our approach achieves comparable performance to recent
state-of-the-art methods that use source data for adaptation.
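The abstract names three concrete ingredients: a student/teacher pair (Mean Teacher), consistency between their predictions under image transformations, and a human pose prior that steers pseudo-labels toward plausible poses. Below is a minimal PyTorch sketch of how these pieces could fit together; the `pose_prior` callable, the MSE consistency loss, and the loss weight are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # Mean Teacher: the teacher's weights track an exponential moving
    # average of the student's weights.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

def adaptation_step(student, teacher, images, augment, pose_prior, opt,
                    lambda_prior=0.1):
    """One source-free self-training step on unlabeled target images.

    `augment` is assumed to be a geometric transformation that can be
    applied consistently to images and heatmaps (e.g., a flip);
    `pose_prior` is a placeholder for the paper's prior, assumed to
    return a differentiable implausibility score per predicted pose.
    """
    with torch.no_grad():
        pseudo = teacher(images)          # teacher heatmaps as pseudo-labels
    preds = student(augment(images))      # student sees a transformed view
    # Prediction-level consistency between student and teacher.
    loss = F.mse_loss(preds, augment(pseudo))
    # Prior regularization toward anatomically plausible poses.
    loss = loss + lambda_prior * pose_prior(preds).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(teacher, student)
    return loss.item()
```

The teacher would typically start as a frozen copy of the source model, and the feature-level consistency mentioned in the abstract could be added as a second MSE term on intermediate activations.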
Related papers
- Turn Down the Noise: Leveraging Diffusion Models for Test-time
Adaptation via Pseudo-label Ensembling [2.5437028043490084]
The goal of test-time adaptation is to adapt a source-pretrained model to a continuously changing target domain without relying on any source data.
We introduce an approach that leverages a pre-trained diffusion model to project the target domain images closer to the source domain.
arXiv Detail & Related papers (2023-11-29T20:35:32Z)
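The idea above lends itself to a small sketch: run the frozen source model on both the raw target image and several diffusion-projected versions of it, then ensemble the predictions into a pseudo-label. `project` is a hypothetical stand-in for the pre-trained diffusion model's noise-and-denoise step, and `n_views` is an illustrative value.

```python
import torch

@torch.no_grad()
def ensembled_pseudo_labels(model, images, project, n_views=4):
    # Average the frozen source model's predictions over the original
    # image and several diffusion-projected views, then take the argmax
    # as the pseudo-label.
    probs = model(images).softmax(dim=-1)
    for _ in range(n_views):
        probs = probs + model(project(images)).softmax(dim=-1)
    return (probs / (n_views + 1)).argmax(dim=-1)
```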
- Feed-Forward Source-Free Domain Adaptation via Class Prototypes [3.5382535469099436]
We present a feed-forward approach that challenges the need for back-propagation based adaptation.
Our approach is based on computing prototypes of classes under the domain shift using a pre-trained model.
arXiv Detail & Related papers (2023-07-20T11:36:45Z)
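As a rough illustration of the prototype idea, one could average the pre-trained encoder's features per (pseudo-)class and re-classify by nearest prototype, with no back-propagation at all. The cosine-similarity classifier and batch-level computation are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def prototype_predictions(encoder, images, pseudo_labels, num_classes):
    # Build one prototype per class from pre-trained features, then
    # re-classify each sample by its most similar prototype. Assumes
    # every class occurs at least once in `pseudo_labels`.
    feats = F.normalize(encoder(images), dim=1)          # (N, D)
    protos = torch.stack([feats[pseudo_labels == c].mean(dim=0)
                          for c in range(num_classes)])
    protos = F.normalize(protos, dim=1)
    return (feats @ protos.T).argmax(dim=1)
```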
- Uncertainty-guided Source-free Domain Adaptation [77.3844160723014]
Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by only using a pre-trained source model.
We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation.
arXiv Detail & Related papers (2022-08-16T08:03:30Z)
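One common way to realize such guidance is to down-weight pseudo-labels by the source model's predictive entropy. The entropy-based weighting below is an illustrative choice, not necessarily the paper's exact uncertainty estimator.

```python
import math
import torch
import torch.nn.functional as F

def uncertainty_weighted_loss(student_logits, source_logits):
    # Weight each pseudo-labeled sample by one minus its normalized
    # predictive entropy, so confident source predictions dominate.
    probs = source_logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    weights = 1.0 - entropy / math.log(probs.size(1))
    ce = F.cross_entropy(student_logits, probs.argmax(dim=1),
                         reduction="none")
    return (weights.detach() * ce).mean()
```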
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distribution is different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
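A toy version of the estimation step might fit one Gaussian per pseudo-class to target features as a surrogate for the unseen source distribution, then sample surrogate features for an alignment loss. The diagonal covariance and pseudo-label grouping are assumptions of this sketch.

```python
import torch

@torch.no_grad()
def estimate_class_gaussians(feats, pseudo_labels, num_classes, eps=1e-4):
    # Fit a diagonal Gaussian per class over target features; samples
    # from these act as stand-ins for inaccessible source features.
    gaussians = []
    for c in range(num_classes):
        fc = feats[pseudo_labels == c]
        gaussians.append(torch.distributions.Normal(
            fc.mean(dim=0), (fc.var(dim=0) + eps).sqrt()))
    return gaussians   # gaussians[c].sample((n,)) -> surrogate features
```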
- Unsupervised Adaptation of Semantic Segmentation Models without Source Data [14.66682099621276]
We consider the novel problem of unsupervised domain adaptation of source models for semantic segmentation, without access to the source data.
We propose a self-training approach to extract the knowledge from the source model.
Our framework is able to achieve significant performance gains compared to directly applying the source model on the target data.
arXiv Detail & Related papers (2021-12-04T15:13:41Z)
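A minimal form of such self-training keeps only the source model's confident per-pixel predictions and ignores the rest during the cross-entropy update; the 0.9 threshold and ignore-index convention are illustrative choices.

```python
import torch

@torch.no_grad()
def confident_pixel_labels(source_model, images, threshold=0.9,
                           ignore_index=255):
    # Per-pixel pseudo-labels from the source model; low-confidence
    # pixels are marked `ignore_index` and skipped by the loss.
    probs = source_model(images).softmax(dim=1)   # (N, C, H, W)
    conf, labels = probs.max(dim=1)
    labels[conf < threshold] = ignore_index
    return labels
```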
- Source-Free Domain Adaptive Fundus Image Segmentation with Denoised Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires access to the source domain data in order to utilize its distribution information for domain alignment with the target data.
In many real-world scenarios, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
arXiv Detail & Related papers (2021-09-19T06:38:21Z)
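One plausible reading of "denoised" pseudo-labeling, sketched below with Monte Carlo dropout as an assumed uncertainty proxy, is to keep a pixel's label only when the prediction is both confident and stable across stochastic forward passes. The binary (sigmoid) setup and both thresholds are assumptions.

```python
import torch

@torch.no_grad()
def denoised_pixel_labels(model, images, n_passes=8, conf_thr=0.75,
                          unc_thr=0.05):
    # Average sigmoid outputs over several dropout-perturbed passes;
    # supervise only pixels that are confident (mean far from 0.5)
    # and stable (low std across passes).
    model.train()                      # keep dropout active for MC passes
    probs = torch.stack([model(images).sigmoid() for _ in range(n_passes)])
    mean, std = probs.mean(dim=0), probs.std(dim=0)
    labels = (mean > 0.5).float()
    keep = ((mean - 0.5).abs() + 0.5 > conf_thr) & (std < unc_thr)
    return labels, keep                # train only where `keep` is True
```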
- Learning Neural Models for Natural Language Processing in the Face of Distributional Shift [10.990447273771592]
The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications.
It builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time.
This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information.
It is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime.
arXiv Detail & Related papers (2021-09-03T14:29:20Z)
- A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation [91.13472029666312]
We propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation.
Our method yields state-of-the-art performance on source-free semantic segmentation tasks for both synthetic-to-real and adverse conditions.
arXiv Detail & Related papers (2021-06-22T10:21:39Z)
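The curriculum idea can be caricatured as a threshold schedule: self-train first on only the most confident pseudo-labels, then admit harder ones each round. The linear schedule below is purely illustrative.

```python
def curriculum_thresholds(start=0.9, end=0.5, rounds=5):
    # Easy-to-hard schedule: one confidence threshold per self-training
    # round, with pseudo-labels regenerated at each step.
    step = (start - end) / max(rounds - 1, 1)
    return [round(start - i * step, 4) for i in range(rounds)]

print(curriculum_thresholds())   # [0.9, 0.8, 0.7, 0.6, 0.5]
```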
- Distill and Fine-tune: Effective Adaptation from a Black-box Source Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge in previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
arXiv Detail & Related papers (2021-04-04T05:29:05Z)
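A compressed sketch of the two steps follows, with KL distillation and self-labeled fine-tuning as assumed loss choices; the paper's exact objectives may differ.

```python
import torch
import torch.nn.functional as F

def distill_step(student, images, black_box_probs, opt):
    # Step 1: distill the black-box source model's (precomputed) soft
    # predictions on target images into a local student.
    loss = F.kl_div(F.log_softmax(student(images), dim=1),
                    black_box_probs, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def finetune_step(student, images, opt):
    # Step 2: fine-tune the student on its own confident pseudo-labels.
    logits = student(images)
    pseudo = logits.detach().argmax(dim=1)
    loss = F.cross_entropy(logits, pseudo)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```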
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and explores how we can effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
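"Source hypothesis transfer" keeps the source classifier head (the hypothesis) frozen and adapts only the feature extractor. An information-maximization objective of the kind associated with this line of work is sketched below; the exact loss form is an assumption.

```python
import torch

def information_maximization_loss(logits):
    # Encourage individually confident predictions (low per-sample
    # entropy) that are also globally diverse (high marginal entropy),
    # while the frozen classifier head supplies the decision boundaries.
    probs = logits.softmax(dim=1)
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    marginal = probs.mean(dim=0)
    div = -(marginal * marginal.clamp_min(1e-8).log()).sum()
    return ent - div   # minimize entropy, maximize diversity
```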