D3GU: Multi-Target Active Domain Adaptation via Enhancing Domain
Alignment
- URL: http://arxiv.org/abs/2401.05465v1
- Date: Wed, 10 Jan 2024 13:45:51 GMT
- Title: D3GU: Multi-Target Active Domain Adaptation via Enhancing Domain
Alignment
- Authors: Lin Zhang and Linghan Xu and Saman Motamed and Shayok Chakraborty and
Fernando De la Torre
- Abstract summary: A Multi-Target Active Domain Adaptation (MT-ADA) framework for image classification, named D3GU, is proposed.
D3GU applies Decomposed Domain Discrimination (D3) during training to achieve both source-target and target-target domain alignments.
Experiments on three benchmark datasets, Office31, OfficeHome, and DomainNet, validate the consistently superior performance of D3GU for MT-ADA.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised domain adaptation (UDA) for image classification has made
remarkable progress in transferring classification knowledge from a labeled
source domain to an unlabeled target domain, thanks to effective domain
alignment techniques. Recently, in order to further improve performance on a
target domain, many Single-Target Active Domain Adaptation (ST-ADA) methods
have been proposed to identify and annotate the salient and exemplar target
samples. However, ST-ADA requires one model to be trained and deployed for
each target domain, as well as the domain label associated with each test
sample. This largely restricts its application in ubiquitous scenarios with
multiple target domains. Therefore, we propose a Multi-Target Active Domain Adaptation
(MT-ADA) framework for image classification, named D3GU, to simultaneously
align different domains and actively select samples from them for annotation.
To the best of our knowledge, this is the first research effort in this field. D3GU
applies Decomposed Domain Discrimination (D3) during training to achieve both
source-target and target-target domain alignments. Then during active sampling,
a Gradient Utility (GU) score is designed to weight every unlabeled target
image by its contribution towards classification and domain alignment tasks,
and is further combined with KMeans clustering to form GU-KMeans for diverse
image sampling. Extensive experiments on three benchmark datasets, Office31,
OfficeHome, and DomainNet, validate the consistently superior performance of
D3GU for MT-ADA.
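The abstract's active-sampling step (weight each unlabeled image by a utility score, then cluster for diversity) can be illustrated with a minimal NumPy sketch. This is a hypothetical reconstruction, not the authors' code: the random `util` values stand in for the paper's Gradient Utility score, and `weighted_kmeans` and `gu_kmeans_select` are illustrative helpers under the assumption that utility scores act as sample weights in k-means and that the sample nearest each centroid is annotated.

```python
import numpy as np

def weighted_kmeans(X, w, k, iters=20, seed=0):
    """Minimal weighted k-means: centroids are utility-weighted cluster means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                # high-utility samples pull the centroid toward themselves
                centers[j] = np.average(X[mask], axis=0, weights=w[mask])
    return centers

def gu_kmeans_select(X, utility, budget, seed=0):
    """Pick `budget` diverse, high-utility samples to annotate."""
    centers = weighted_kmeans(X, utility, budget, seed=seed)
    selected = []
    for c in centers:
        dists = np.linalg.norm(X - c, axis=1)
        dists[selected] = np.inf  # keep picks unique
        selected.append(int(dists.argmin()))
    return selected

# toy usage: 100 random 8-d features with random stand-in utility scores
rng = np.random.default_rng(1)
feats = rng.normal(size=(100, 8))
util = rng.uniform(0.1, 1.0, size=100)
picked = gu_kmeans_select(feats, util, budget=5)
print(sorted(picked))
```

In the paper the utility score is derived from each image's gradient contribution to the classification and domain-alignment losses; any nonnegative per-sample score can be plugged into this weighting scheme.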
Related papers
- Domain-Rectifying Adapter for Cross-Domain Few-Shot Segmentation (2024-04-16)
  We propose a small adapter for rectifying diverse target domain styles to the source domain.
  The adapter is trained to rectify image features from diverse synthesized target domains to align with the source domain.
  Our method achieves promising results on cross-domain few-shot semantic segmentation tasks.
- Semi-supervised Domain Adaptation via Prototype-based Multi-level Learning (2023-05-04)
  In semi-supervised domain adaptation (SSDA), a few labeled target samples per class help the model transfer knowledge representation from the fully labeled source domain to the target domain.
  We propose a Prototype-based Multi-level Learning (ProML) framework to better tap the potential of labeled target samples.
- ADAS: A Simple Active-and-Adaptive Baseline for Cross-Domain 3D Semantic Segmentation (2022-12-20)
  We propose an Active-and-Adaptive (ADAS) baseline to enhance the weak cross-domain generalization ability of a well-trained 3D segmentation model.
  ADAS performs an active sampling operation to select a maximally informative subset from both source and target domains for effective adaptation.
  ADAS is verified to be effective in many cross-domain settings, including: 1) Unsupervised Domain Adaptation (UDA), where all target-domain samples are unlabeled; 2) Unsupervised Few-shot Domain Adaptation (UFDA), where only a few unlabeled samples are available in the target domain.
- Reiterative Domain Aware Multi-target Adaptation (2021-08-26)
  We propose Reiterative D-CGCT (RD-CGCT), which obtains better adaptation performance by reiterating multiple times over each target domain.
  RD-CGCT significantly improves performance over D-CGCT on the Office-Home and Office31 datasets.
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation (2021-06-10)
  Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully labeled source domain to a different unlabeled target domain.
  We build upon contrastive self-supervised learning to align features and thereby reduce the domain discrepancy between training and testing sets.
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation (2021-03-20)
  Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
  We present a different perspective on MSDA, wherein deep models are observed to implicitly align the domains under label supervision.
- Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency (2021-01-29)
  Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
  We show that, in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can be effective without any adversarial alignment to learn a good target classifier.
  Our Pretraining and Consistency (PAC) approach achieves state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
- Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation (2020-08-27)
  Unsupervised domain adaptation (UDA) attempts to recognize unlabeled target samples by building a learning model from a differently distributed labeled source domain.
  In this paper, we propose a novel Adversarial Dual Distinct Classifier Network (AD$^2$CN) to align the source and target data distributions while simultaneously matching task-specific category boundaries.
  Specifically, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
- Learning Target Domain Specific Classifier for Partial Domain Adaptation (2020-08-25)
  Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
  This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
This list is automatically generated from the titles and abstracts of the papers on this site.