Towards Corruption-Agnostic Robust Domain Adaptation
- URL: http://arxiv.org/abs/2104.10376v1
- Date: Wed, 21 Apr 2021 06:27:48 GMT
- Title: Towards Corruption-Agnostic Robust Domain Adaptation
- Authors: Yifan Xu, Kekai Sheng, Weiming Dong, Baoyuan Wu, Changsheng Xu,
Bao-Gang Hu
- Abstract summary: We investigate a new task, Corruption-agnostic Robust Domain Adaptation (CRDA): to be accurate on original data and robust against unavailable-for-training corruptions on target domains.
We propose a new approach based on two technical insights into CRDA: 1) an easy-to-plug module called Domain Discrepancy Generator (DDG) that generates samples that enlarge domain discrepancy to mimic unpredictable corruptions; 2) a simple but effective teacher-student scheme with contrastive loss to enhance the constraints on target domains.
- Score: 76.66523954277945
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Great progress has been achieved in domain adaptation over the past
decades. Existing works always rest on the ideal assumption that the testing
target domain is i.i.d. with the training target domain. However, due to unpredictable corruptions
(e.g., noise and blur) in real data like web images, domain adaptation methods
are increasingly required to be corruption robust on target domains. In this
paper, we investigate a new task, Corruption-agnostic Robust Domain Adaptation
(CRDA): to be accurate on original data and robust against
unavailable-for-training corruptions on target domains. This task is
non-trivial due to large domain discrepancy and unsupervised target domains. We
observe that simply combining popular domain adaptation and corruption
robustness methods yields sub-optimal CRDA results. We propose a new approach
based on two technical insights into CRDA: 1) an easy-to-plug module called
Domain Discrepancy Generator (DDG) that generates samples that enlarge domain
discrepancy to mimic unpredictable corruptions; 2) a simple but effective
teacher-student scheme with contrastive loss to enhance the constraints on
target domains. Experiments verify that DDG keeps or even improves performance
on original data and achieves better corruption robustness than the baselines.
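
To make the two insights above concrete, below is a minimal PyTorch sketch of how a DDG-style perturbation module and a teacher-student contrastive consistency loss could fit together. It is an illustration built only from the abstract, not the authors' released implementation: the names (`DomainDiscrepancyGenerator`, `contrastive_consistency`), the linear-plus-noise perturbation, the InfoNCE-style loss, and all hyperparameters are assumptions made for this sketch.

```python
# Minimal sketch of the abstract's two ideas (hypothetical, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDiscrepancyGenerator(nn.Module):
    """Hypothetical DDG: perturbs target features to enlarge domain discrepancy,
    standing in for corruptions that are unavailable at training time."""
    def __init__(self, feat_dim: int, noise_scale: float = 0.1):
        super().__init__()
        self.perturb = nn.Linear(feat_dim, feat_dim)
        self.noise_scale = noise_scale

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Learned perturbation plus random noise (assumed form of the generator).
        delta = torch.tanh(self.perturb(feats)) * self.noise_scale
        return feats + delta + self.noise_scale * torch.randn_like(feats)

def contrastive_consistency(student_feats, teacher_feats, temperature: float = 0.1):
    """InfoNCE-style loss pulling the student's features toward the teacher's
    features for the same target samples (positives on the diagonal)."""
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats, dim=1)
    logits = s @ t.t() / temperature                    # (B, B) similarity matrix
    labels = torch.arange(s.size(0), device=s.device)   # matching indices are positives
    return F.cross_entropy(logits, labels)

# Toy usage: the teacher is a frozen copy of the student; gradients reach the student only.
if __name__ == "__main__":
    feat_dim, batch = 256, 8
    student, teacher = nn.Linear(512, feat_dim), nn.Linear(512, feat_dim)
    teacher.load_state_dict(student.state_dict())
    ddg = DomainDiscrepancyGenerator(feat_dim)

    x = torch.randn(batch, 512)            # stand-in for target-domain inputs/features
    s_out = ddg(student(x))                # student sees DDG-perturbed features
    with torch.no_grad():
        t_out = teacher(x)                 # teacher sees clean features
    loss = contrastive_consistency(s_out, t_out)
    loss.backward()
    print(f"contrastive consistency loss: {loss.item():.4f}")
```

In the full method, the DDG is additionally trained so that its generated samples enlarge the domain discrepancy, as the abstract describes; that objective is omitted here for brevity.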
Related papers
- GrabDAE: An Innovative Framework for Unsupervised Domain Adaptation Utilizing Grab-Mask and Denoise Auto-Encoder [16.244871317281614]
Unsupervised Domain Adaptation (UDA) aims to adapt a model trained on a labeled source domain to an unlabeled target domain by addressing the domain shift.
We introduce GrabDAE, an innovative UDA framework designed to tackle domain shift in visual classification tasks.
arXiv Detail & Related papers (2024-10-10T15:19:57Z)
- Gradual Domain Adaptation: Theory and Algorithms [15.278170387810409]
Unsupervised domain adaptation (UDA) adapts a model from a labeled source domain to an unlabeled target domain in a one-off way.
In this work, we first theoretically analyze gradual self-training, a popular GDA algorithm, and provide a significantly improved generalization bound.
We propose Generative Gradual Domain Adaptation with Optimal Transport (GOAT).
arXiv Detail & Related papers (2023-10-20T23:02:08Z)
- Make the U in UDA Matter: Invariant Consistency Learning for Unsupervised Domain Adaptation [86.61336696914447]
We propose to make the U in Unsupervised DA matter by giving equal status to the two domains, and dub our approach "Invariant CONsistency learning" (ICON).
ICON achieves the state-of-the-art performance on the classic UDA benchmarks: Office-Home and VisDA-2017, and outperforms all the conventional methods on the challenging WILDS 2.0 benchmark.
arXiv Detail & Related papers (2023-09-22T09:43:32Z)
- Unsupervised Domain Adaptation for Anatomical Landmark Detection [5.070344284426738]
We propose a novel framework for anatomical landmark detection under the setting of unsupervised domain adaptation (UDA).
The framework leverages self-training and domain adversarial learning to address the domain gap during adaptation.
Our experiments on cephalometric and lung landmark detection show the effectiveness of the method, which reduces the domain gap by a large margin and outperforms other UDA methods consistently.
arXiv Detail & Related papers (2023-08-25T10:22:13Z)
- Domain Adaptive Person Search [20.442648584402917]
We present Domain Adaptive Person Search (DAPS), which aims to generalize the model from a labeled source domain to the unlabeled target domain.
We show that our framework achieves 34.7% in mAP and 80.6% in top-1 on the PRW dataset.
arXiv Detail & Related papers (2022-07-25T04:02:39Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Unsupervised Domain Adaptive 3D Detection with Multi-Level Consistency [90.71745178767203]
Deep learning-based 3D object detection has achieved unprecedented success with the advent of large-scale autonomous driving datasets.
Existing 3D domain adaptive detection methods often assume prior access to the target domain annotations, which is rarely feasible in the real world.
We study a more realistic setting, unsupervised 3D domain adaptive detection, which only utilizes source domain annotations.
arXiv Detail & Related papers (2021-07-23T17:19:23Z)
- Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency [93.89773386634717]
Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
We show that in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can be effective without any adversarial alignment to learn a good target classifier.
Our Pretraining and Consistency (PAC) approach can achieve state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
arXiv Detail & Related papers (2021-01-29T18:40:17Z)
- Effective Label Propagation for Discriminative Semi-Supervised Domain Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z)