Feed-Forward Source-Free Domain Adaptation via Class Prototypes
- URL: http://arxiv.org/abs/2307.10787v1
- Date: Thu, 20 Jul 2023 11:36:45 GMT
- Title: Feed-Forward Source-Free Domain Adaptation via Class Prototypes
- Authors: Ondrej Bohdal, Da Li, Timothy Hospedales
- Abstract summary: We present a feed-forward approach that challenges the need for back-propagation-based adaptation.
Our approach is based on computing prototypes of classes under the domain shift using a pre-trained model.
- Score: 3.5382535469099436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Source-free domain adaptation has become popular because of its practical usefulness and because it does not require access to source data. However, the adaptation process still takes a considerable amount of time and predominantly relies on optimization via back-propagation. In this work we present a simple feed-forward approach that challenges the need for back-propagation-based adaptation. Our approach computes class prototypes under the domain shift using a pre-trained model. It achieves strong improvements in accuracy over the pre-trained model and requires only a small fraction of the time needed by existing domain adaptation methods.
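The abstract describes the method only at a high level. As a rough illustration, below is a minimal PyTorch sketch of what feed-forward, prototype-based adaptation can look like; it is written under explicit assumptions and is not the authors' exact procedure: prototypes are taken here as softmax-weighted means of target features from the pre-trained backbone, and predictions are made by cosine similarity to the nearest prototype. The names `backbone`, `classifier`, and `target_loader` are hypothetical placeholders.

```python
# Hedged sketch of feed-forward adaptation via class prototypes.
# Assumptions (not from the paper): prototypes are softmax-weighted means of
# penultimate-layer target features; inference is nearest-prototype matching.
import torch
import torch.nn.functional as F


@torch.no_grad()
def compute_class_prototypes(backbone, classifier, target_loader, num_classes):
    """Single forward pass over unlabeled target data; no back-propagation."""
    proto_sum = None
    weight_sum = torch.zeros(num_classes)
    for images, _ in target_loader:                   # target labels are never used
        feats = backbone(images)                      # (B, D) features
        probs = F.softmax(classifier(feats), dim=1)   # (B, C) source-head confidences
        if proto_sum is None:
            proto_sum = torch.zeros(num_classes, feats.shape[1])
        # Soft assignment: each sample contributes to every class prototype,
        # weighted by the pre-trained model's predicted probability.
        proto_sum += probs.t() @ feats                # (C, D)
        weight_sum += probs.sum(dim=0)                # (C,)
    return proto_sum / weight_sum.clamp(min=1e-8).unsqueeze(1)


@torch.no_grad()
def predict_with_prototypes(backbone, prototypes, images):
    """Classify target images by cosine similarity to the adapted prototypes."""
    feats = F.normalize(backbone(images), dim=1)
    protos = F.normalize(prototypes, dim=1)
    return (feats @ protos.t()).argmax(dim=1)
```

Because adaptation reduces to a single forward pass over the target data, its cost is dominated by feature extraction, which is consistent with the abstract's claim that only a small fraction of the time of optimization-based methods is needed.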
Related papers
- Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence [60.37934652213881]
Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain.
This paper investigates a practical DA paradigm, namely Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation.
We present learn from the learnt (LFTL), a novel paradigm for SFADA to leverage the learnt knowledge from the source pretrained model and actively iterated models without extra overhead.
arXiv Detail & Related papers (2024-07-26T17:51:58Z)
- Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
Restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda).
arXiv Detail & Related papers (2024-02-07T04:11:25Z)
- Turn Down the Noise: Leveraging Diffusion Models for Test-time Adaptation via Pseudo-label Ensembling [2.5437028043490084]
The goal of test-time adaptation is to adapt a source-pretrained model to a continuously changing target domain without relying on any source data.
We introduce an approach that leverages a pre-trained diffusion model to project the target domain images closer to the source domain.
arXiv Detail & Related papers (2023-11-29T20:35:32Z)
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
- Prior-guided Source-free Domain Adaptation for Human Pose Estimation [24.50953879583841]
Domain adaptation methods for 2D human pose estimation typically require continuous access to the source data.
We present Prior-guided Self-training (POST), a pseudo-labeling approach that builds on the popular Mean Teacher framework.
arXiv Detail & Related papers (2023-08-26T20:30:04Z)
- Few-Shot Adaptation of Pre-Trained Networks for Domain Shift [17.123505029637055]
Deep networks are prone to performance degradation when there is a domain shift between the source (training) data and target (test) data.
Recent test-time adaptation methods update batch normalization layers of pre-trained source models deployed in new target environments with streaming data to mitigate such performance degradation.
We propose a framework for few-shot domain adaptation to address the practical challenges of data-efficient adaptation.
arXiv Detail & Related papers (2022-05-30T16:49:59Z)
- Learning Instance-Specific Adaptation for Cross-Domain Segmentation [79.61787982393238]
We propose a test-time adaptation method for cross-domain image segmentation.
Given a new unseen instance at test time, we adapt a pre-trained model by conducting instance-specific BatchNorm calibration.
arXiv Detail & Related papers (2022-03-30T17:59:45Z)
- Adaptive Boosting for Domain Adaptation: Towards Robust Predictions in Scene Segmentation [41.05407168312345]
Domain adaptation aims to transfer the shared knowledge learned from the source domain to a new environment, i.e., the target domain.
One common practice is to train the model on both labeled source-domain data and unlabeled target-domain data.
We propose an efficient bootstrapping method, called Adaboost Student, which explicitly learns complementary models during training.
arXiv Detail & Related papers (2021-03-29T15:12:58Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
- Don't Stop Pretraining: Adapt Language Models to Domains and Tasks [81.99843216550306]
We present a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks.
A second phase of pretraining in-domain (domain-adaptive pretraining) leads to performance gains.
Adapting to the task's unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining.
arXiv Detail & Related papers (2020-04-23T04:21:19Z)