Better Practices for Domain Adaptation
- URL: http://arxiv.org/abs/2309.03879v1
- Date: Thu, 7 Sep 2023 17:44:18 GMT
- Title: Better Practices for Domain Adaptation
- Authors: Linus Ericsson, Da Li and Timothy M. Hospedales
- Abstract summary: Domain adaptation (DA) aims to provide frameworks for adapting models to deployment data without using labels.
The unclear validation protocol for DA has led to bad practices in the literature.
We show that there are challenges across all three branches of domain adaptation methodology.
- Score: 62.70267990659201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distribution shifts are all too common in real-world applications of machine
learning. Domain adaptation (DA) aims to address this by providing various
frameworks for adapting models to the deployment data without using labels.
However, the domain shift scenario raises a second more subtle challenge: the
difficulty of performing hyperparameter optimisation (HPO) for these adaptation
algorithms without access to a labelled validation set. The unclear validation
protocol for DA has led to bad practices in the literature, such as performing
HPO using the target test labels when, in real-world scenarios, they are not
available. This has resulted in over-optimism about DA research progress
compared to reality. In this paper, we analyse the state of DA when using good
evaluation practice, by benchmarking a suite of candidate validation criteria
and using them to assess popular adaptation algorithms. We show that there are
challenges across all three branches of domain adaptation methodology including
Unsupervised Domain Adaptation (UDA), Source-Free Domain Adaptation (SFDA), and
Test Time Adaptation (TTA). While the results show that realistically
achievable performance is often worse than expected, they also show that using
proper validation splits is beneficial and that some previously unexplored
validation metrics provide the best options to date. Altogether, our
improved practices covering data, training, validation and hyperparameter
optimisation form a new rigorous pipeline to improve benchmarking, and hence
research progress, within this important field going forward.
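To make the central problem concrete, below is a minimal, hypothetical sketch of label-free model selection: each candidate hyperparameter setting is scored on unlabeled target data by an unsupervised validation criterion, since target test labels are unavailable in realistic deployments. All names here are illustrative assumptions; the paper benchmarks a suite of such criteria rather than prescribing a single implementation.

```python
from typing import Callable, Dict
import numpy as np

def select_model(
    target_features: Dict[str, np.ndarray],
    criterion: Callable[[np.ndarray], float],
) -> str:
    """Label-free hyperparameter selection (illustrative sketch, not the
    paper's exact pipeline). `target_features` maps each candidate
    hyperparameter setting to the unlabeled-target features of the model
    adapted with it; `criterion` is any unsupervised validation score
    (higher = better). Returns the name of the best candidate."""
    scores = {name: criterion(feats) for name, feats in target_features.items()}
    return max(scores, key=scores.get)
```

The key property is that no target labels enter the selection loop; any of the candidate validation criteria benchmarked in the paper could plug in as `criterion`.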
Related papers
- SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation [55.87169702896249]
Unsupervised Domain Adaptation (DA) consists of adapting a model trained on a labeled source domain to perform well on an unlabeled target domain with some data distribution shift.
We propose a framework to evaluate DA methods and present a fair evaluation of existing shallow algorithms, including reweighting, mapping, and subspace alignment.
Our benchmark highlights the importance of realistic validation and provides practical guidance for real-life applications.
arXiv Detail & Related papers (2024-07-16T12:52:29Z)
- What, How, and When Should Object Detectors Update in Continually Changing Test Domains? [34.13756022890991]
Test-time adaptation algorithms have been proposed to adapt a model online during inference on test data.
We propose a novel online adaptation approach for object detection in continually changing test domains.
Our approach surpasses baselines on widely used benchmarks, achieving improvements of up to 4.9 and 7.9 percentage points in mAP.
arXiv Detail & Related papers (2023-12-12T07:13:08Z)
- On Pitfalls of Test-Time Adaptation [82.8392232222119]
Test-Time Adaptation (TTA) has emerged as a promising approach for tackling the robustness challenge under distribution shifts.
We present TTAB, a test-time adaptation benchmark that encompasses ten state-of-the-art algorithms, a diverse array of distribution shifts, and two evaluation protocols.
arXiv Detail & Related papers (2023-06-06T09:35:29Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Tune it the Right Way: Unsupervised Validation of Domain Adaptation via Soft Neighborhood Density [125.64297244986552]
We propose an unsupervised validation criterion that measures the density of soft neighborhoods by computing the entropy of the similarity distribution between points.
Our criterion is simpler than competing validation methods, yet more effective; a minimal sketch of this computation follows this entry.
arXiv Detail & Related papers (2021-08-24T17:41:45Z)
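To illustrate the Soft Neighborhood Density idea named in this entry, below is a minimal Python sketch: score a model by the mean entropy of the softmax similarity distribution over its unlabeled target features. The function name and temperature value are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def soft_neighborhood_density(features: np.ndarray, tau: float = 0.05) -> float:
    """SND-style validation score: mean entropy of each point's softmax
    similarity distribution over the other target points. A higher score
    suggests denser soft neighborhoods. `tau` is an assumed temperature,
    not necessarily the paper's setting."""
    # Cosine similarities via L2-normalised features
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)     # exclude self-similarity
    p = np.exp(sim / tau)
    p /= p.sum(axis=1, keepdims=True)  # row-wise softmax
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return float(entropy.mean())
```

Used as a `criterion` in the selection loop sketched after the main abstract, the model whose target features yield the highest score would be kept.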
- VisDA-2021 Competition: Universal Domain Adaptation to Improve Performance on Out-of-Distribution Data [64.91713686654805]
The Visual Domain Adaptation (VisDA) 2021 competition tests models' ability to adapt to novel test distributions.
We will evaluate adaptation to novel viewpoints, backgrounds, modalities and degradation in quality.
Performance will be measured using a rigorous protocol, comparing to state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-07-23T03:21:51Z)
- Self-Domain Adaptation for Face Anti-Spoofing [31.441928816043536]
We propose a self-domain adaptation framework that leverages unlabeled test-domain data at inference.
A meta-learning-based adaptor learning algorithm is proposed, using data from multiple source domains at the training step.
arXiv Detail & Related papers (2021-02-24T08:46:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.