Can We Evaluate Domain Adaptation Models Without Target-Domain Labels?
- URL: http://arxiv.org/abs/2305.18712v3
- Date: Sun, 18 Feb 2024 14:14:50 GMT
- Title: Can We Evaluate Domain Adaptation Models Without Target-Domain Labels?
- Authors: Jianfei Yang, Hanjie Qian, Yuecong Xu, Kai Wang, Lihua Xie
- Abstract summary: Unsupervised domain adaptation (UDA) involves adapting a model trained on a label-rich source domain to an unlabeled target domain.
In real-world scenarios, the absence of target-domain labels makes it challenging to evaluate the performance of UDA models.
We propose a novel metric called the Transfer Score to address these issues.
- Score: 36.05871459064825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) involves adapting a model trained on a
label-rich source domain to an unlabeled target domain. However, in real-world
scenarios, the absence of target-domain labels makes it challenging to evaluate
the performance of UDA models. Furthermore, prevailing UDA methods relying on
adversarial training and self-training could lead to model degeneration and
negative transfer, further exacerbating the evaluation problem. In this paper,
we propose a novel metric called the \textit{Transfer Score} to address these
issues. The proposed metric enables the unsupervised evaluation of UDA models
by assessing the spatial uniformity of the classifier via model parameters, as
well as the transferability and discriminability of deep representations. Based
on the metric, we achieve three novel objectives without target-domain labels:
(1) selecting the best UDA method from a range of available options, (2)
optimizing hyperparameters of UDA models to prevent model degeneration, and (3)
identifying which checkpoint of UDA model performs optimally. Our work bridges
the gap between data-level UDA research and practical UDA scenarios, enabling a
realistic assessment of UDA model performance. We validate the effectiveness of
our metric through extensive empirical studies on UDA datasets of different
scales and imbalanced distributions. The results demonstrate that our metric
robustly achieves the aforementioned goals.
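As a rough illustration of how such a label-free metric could work (the paper's actual formulation may differ), the two ingredients named above can be sketched with simple proxies: classifier-weight uniformity measured via pairwise cosine similarity, and feature discriminability measured as a between-class to within-class scatter ratio over pseudo-labeled target features. The combination weighting here is hypothetical.

```python
import numpy as np

def classifier_uniformity(W):
    """Proxy for the spatial uniformity of a linear classifier: class
    weight vectors of a well-adapted model tend to spread evenly on the
    unit sphere, i.e. low mean pairwise cosine similarity."""
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = W @ W.T
    off_diag = cos[~np.eye(len(W), dtype=bool)]
    return 1.0 - off_diag.mean()  # higher = more uniform

def feature_discriminability(feats, pseudo_labels):
    """Proxy for discriminability: ratio of between-class to
    within-class scatter of target features under pseudo-labels."""
    overall = feats.mean(axis=0)
    within, between = 0.0, 0.0
    for c in np.unique(pseudo_labels):
        fc = feats[pseudo_labels == c]
        mu = fc.mean(axis=0)
        within += ((fc - mu) ** 2).sum()
        between += len(fc) * ((mu - overall) ** 2).sum()
    return between / (within + 1e-8)

def transfer_score(W, feats, pseudo_labels):
    # Hypothetical equal-weight combination; the paper defines its own.
    return classifier_uniformity(W) + feature_discriminability(feats, pseudo_labels)
```

With scores of this kind, checkpoints, hyperparameters, or whole UDA methods can be ranked on the target domain without ever touching a target label.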
Related papers
- Unveiling the Superior Paradigm: A Comparative Study of Source-Free Domain Adaptation and Unsupervised Domain Adaptation [52.36436121884317]
We show that Source-Free Domain Adaptation (SFDA) generally outperforms Unsupervised Domain Adaptation (UDA) in real-world scenarios.
SFDA offers advantages in time efficiency, storage requirements, targeted learning objectives, reduced risk of negative transfer, and increased robustness against overfitting.
We propose a novel weight estimation method that effectively integrates available source data into multi-SFDA approaches.
arXiv Detail & Related papers (2024-11-24T13:49:29Z) - Unsupervised Domain Adaptation Via Data Pruning [0.0]
We consider the problem from the perspective of unsupervised domain adaptation (UDA).
We propose AdaPrune, a method for UDA whereby training examples are removed to attempt to align the training distribution to that of the target data.
As a method for UDA, we show that AdaPrune outperforms related techniques, and is complementary to other UDA algorithms such as CORAL.
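A minimal sketch of pruning-based alignment in this spirit (not AdaPrune itself, whose selection criterion differs): score each source example by the distance of its features to the target-domain centroid, and drop the farthest fraction so the retained training set better matches the target distribution.

```python
import numpy as np

def prune_for_alignment(source_feats, target_feats, drop_frac=0.2):
    """Toy pruning-based alignment: remove the source examples whose
    features lie farthest from the target centroid, keeping the rest
    as the new training set. Returns sorted indices of kept examples."""
    centroid = target_feats.mean(axis=0)
    dists = np.linalg.norm(source_feats - centroid, axis=1)
    keep = int(len(source_feats) * (1 - drop_frac))
    kept_idx = np.argsort(dists)[:keep]  # closest examples survive
    return np.sort(kept_idx)
```

Because pruning only edits the training set, it composes naturally with feature-alignment methods such as CORAL, which is the complementarity the paper reports.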
arXiv Detail & Related papers (2024-09-18T15:48:59Z) - Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution to identify previously unseen target classes during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z) - SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation [25.939292305808934]
Unsupervised domain adaptation (UDA) can transfer knowledge learned from a label-rich dataset to an unlabeled target dataset.
In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving adversarial robustness of UDA models.
arXiv Detail & Related papers (2022-12-12T14:25:40Z) - Exploring Adversarially Robust Training for Unsupervised Domain Adaptation [71.94264837503135]
Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain.
This paper explores how to enhance the unlabeled data robustness via AT while learning domain-invariant features for UDA.
We propose a novel Adversarially Robust Training method for UDA accordingly, referred to as ARTUDA.
arXiv Detail & Related papers (2022-02-18T17:05:19Z) - UMAD: Universal Model Adaptation under Domain and Category Shift [138.12678159620248]
The Universal Model ADaptation (UMAD) framework handles both open-set and open-partial-set UDA scenarios without access to the source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
arXiv Detail & Related papers (2021-12-16T01:22:59Z) - Adversarial Robustness for Unsupervised Domain Adaptation [48.51898925429575]
In this work, we leverage intermediate representations learned by multiple robust ImageNet models to improve the robustness of UDA models.
Our method works by aligning the features of the UDA model with the robust features learned by ImageNet pre-trained models along with domain adaptation training.
arXiv Detail & Related papers (2021-09-02T13:45:01Z) - UDALM: Unsupervised Domain Adaptation through Language Modeling [79.73916345178415]
We introduce UDALM, a fine-tuning procedure that uses a mixed classification and Masked Language Model loss.
Our experiments show that the performance of models trained with the mixed loss scales with the amount of available target data, and that the loss can be effectively used as a stopping criterion.
Our method is evaluated on twelve domain pairs of the Amazon Reviews Sentiment dataset, yielding $91.74\%$ accuracy, which is a $1.11\%$ absolute improvement over the state-of-the-art.
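The shape of such a mixed objective can be sketched as a weighted sum of a supervised classification loss on labeled source batches and a masked-language-model loss on unlabeled target batches; the mixing weight `lam` here is a hypothetical parameter, not the paper's setting.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy from raw logits (numerically stable)."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def mixed_loss(clf_logits, clf_labels, mlm_logits, mlm_labels, lam=0.5):
    """UDALM-style mixed objective: classification loss on source data
    plus MLM loss (cross-entropy over masked-token predictions) on
    unlabeled target data, blended by a hypothetical weight `lam`."""
    return (lam * cross_entropy(clf_logits, clf_labels)
            + (1 - lam) * cross_entropy(mlm_logits, mlm_labels))
```

Because the MLM term needs no target labels, it can be monitored throughout training, which is what makes it usable as a stopping criterion.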
arXiv Detail & Related papers (2021-04-14T19:05:01Z) - Robustified Domain Adaptation [13.14535125302501]
Unsupervised domain adaptation (UDA) is widely used to transfer knowledge from a labeled source domain to an unlabeled target domain.
The inevitable domain distribution deviation in UDA is a critical barrier to model robustness on the target domain.
We propose a novel Class-consistent Unsupervised Domain Adaptation (CURDA) framework for training robust UDA models.
arXiv Detail & Related papers (2020-11-18T22:21:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.