Reiterative Domain Aware Multi-target Adaptation
- URL: http://arxiv.org/abs/2109.00919v1
- Date: Thu, 26 Aug 2021 17:12:25 GMT
- Title: Reiterative Domain Aware Multi-target Adaptation
- Authors: Sudipan Saha and Shan Zhao and Xiao Xiang Zhu
- Abstract summary: We propose Reiterative D-CGCT (RD-CGCT) that obtains better adaptation performance by reiterating multiple times over each target domain.
RD-CGCT significantly improves the performance over D-CGCT for Office-Home and Office31 datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most domain adaptation methods focus on single-source-single-target
adaptation setting. Multi-target domain adaptation is a powerful extension in
which a single classifier is learned for multiple unlabeled target domains. To
build a multi-target classifier, it is crucial to effectively aggregate
features from the labeled source and different unlabeled target domains.
Towards this, the recently introduced Domain-aware Curriculum Graph Co-Teaching
(D-CGCT) exploits a dual classifier head, one of which is based on a graph
neural network. D-CGCT uses a sequential adaptation strategy that adapts one
domain at a time, starting from the target domains most similar to the source,
on the assumption that the network finds such domains easier to adapt to.
However, we argue that no domain is easy or difficult in an absolute sense, and
each domain can contain samples with different characteristics. Following this
cue, we propose Reiterative D-CGCT (RD-CGCT), which obtains better adaptation
performance by reiterating multiple times over each target domain while keeping
the total number of iterations the same.
RD-CGCT further improves the adaptation performance by including more source
samples than target samples in each training minibatch. The proposed RD-CGCT
significantly improves performance over D-CGCT on the Office-Home and Office31
datasets.
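The reiterative schedule and source-heavy minibatch described in the abstract can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the number of passes, and the 2:1 source-to-target ratio are all hypothetical choices for the example.

```python
# Minimal sketch of the RD-CGCT training schedule described above.
# All names and the 2:1 source/target ratio are illustrative assumptions.

def reiterative_schedule(target_domains, total_iterations, num_passes):
    """Visit each target domain num_passes times (D-CGCT visits each once),
    splitting a fixed iteration budget evenly across all visits."""
    iters_per_visit = total_iterations // (len(target_domains) * num_passes)
    return [
        (domain, iters_per_visit)
        for _ in range(num_passes)    # reiterate over the domain sequence
        for domain in target_domains  # assumed easier-to-harder ordering
    ]

def minibatch_sizes(batch_size, source_fraction=2 / 3):
    """Allocate more source than target samples in each minibatch."""
    n_source = int(batch_size * source_fraction)
    return n_source, batch_size - n_source

# Example: 2 target domains, a budget of 1200 iterations, 3 passes.
schedule = reiterative_schedule(["clipart", "product"], 1200, 3)
print(schedule)             # 6 visits of 200 iterations; total budget unchanged
print(minibatch_sizes(48))  # (32, 16): twice as many source as target samples
```

The point of the sketch is that reiteration redistributes, rather than enlarges, the iteration budget: each domain is revisited, but the sum over all visits still equals the original total.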
Related papers
- Domain-Rectifying Adapter for Cross-Domain Few-Shot Segmentation [40.667166043101076]
We propose a small adapter for rectifying diverse target domain styles to the source domain.
The adapter is trained to rectify the image features from diverse synthesized target domains to align with the source domain.
Our method achieves promising results on cross-domain few-shot semantic segmentation tasks.
arXiv Detail & Related papers (2024-04-16T07:07:40Z) - Strong-Weak Integrated Semi-supervision for Unsupervised Single and Multi Target Domain Adaptation [6.472434306724611]
Unsupervised domain adaptation (UDA) focuses on transferring knowledge learned in the labeled source domain to the unlabeled target domain.
In this paper, we propose a novel strong-weak integrated semi-supervision (SWISS) learning strategy for image classification.
arXiv Detail & Related papers (2023-09-12T19:08:54Z) - Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z) - From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation aims at acquiring knowledge from a labeled source domain and transferring it to an unlabeled target domain under distribution shift.
Recent advances show that deep pre-trained models of large scale endow rich knowledge to tackle diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical-class-space assumption, requiring only that the source class space subsume the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z) - Student Become Decathlon Master in Retinal Vessel Segmentation via Dual-teacher Multi-target Domain Adaptation [1.121358474059223]
We propose RVms, a novel unsupervised multi-target domain adaptation approach to segment retinal vessels (RVs) from multimodal and multicenter retinal images.
RVms is found to be very close to the target-trained Oracle in terms of segmenting the RVs, largely outperforming other state-of-the-art methods.
arXiv Detail & Related papers (2022-03-07T02:20:14Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Multi-Target Domain Adaptation with Collaborative Consistency Learning [105.7615147382486]
We propose a collaborative learning framework to achieve unsupervised multi-target domain adaptation.
The proposed method can effectively exploit rich structured information contained in both labeled source domain and multiple unlabeled target domains.
arXiv Detail & Related papers (2021-06-07T08:36:20Z) - Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help to alleviate multiple domain shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT) that uses a dual classifier head, with one of them being a graph convolutional network (GCN) which aggregates features from similar samples across the domains.
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
arXiv Detail & Related papers (2021-04-01T23:41:41Z) - Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency [93.89773386634717]
Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
We show that in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can be effective without any adversarial alignment to learn a good target classifier.
Our Pretraining and Consistency (PAC) approach can achieve state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
arXiv Detail & Related papers (2021-01-29T18:40:17Z) - Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims at adapting a model trained on a well-labeled source domain to an unlabeled target domain drawn from a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.