Gradient Regularized Contrastive Learning for Continual Domain
Adaptation
- URL: http://arxiv.org/abs/2007.12942v1
- Date: Sat, 25 Jul 2020 14:30:03 GMT
- Title: Gradient Regularized Contrastive Learning for Continual Domain
Adaptation
- Authors: Peng Su, Shixiang Tang, Peng Gao, Di Qiu, Ni Zhao, Xiaogang Wang
- Abstract summary: We study the problem of continual domain adaptation, where the model is presented with a labeled source domain and a sequence of unlabeled target domains.
In this work, we propose Gradient Regularized Contrastive Learning to solve the above obstacles.
Our method can jointly learn both semantically discriminative and domain-invariant features with labeled source domain and unlabeled target domains.
- Score: 26.21464286134764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human beings can quickly adapt to environmental changes by leveraging
learning experience. However, the poor ability to adapt to dynamic
environments remains a major challenge for AI models. To better understand this
issue, we study the problem of continual domain adaptation, where the model is
presented with a labeled source domain and a sequence of unlabeled target
domains. There are two major obstacles in this problem: domain shifts and
catastrophic forgetting. In this work, we propose Gradient Regularized
Contrastive Learning to solve the above obstacles. At the core of our method,
gradient regularization plays two key roles: (1) enforces the gradient of
contrastive loss not to increase the supervised training loss on the source
domain, which maintains the discriminative power of learned features; (2)
regularizes the gradient update on the new domain not to increase the
classification loss on the old target domains, which enables the model to adapt
to an incoming target domain while preserving the performance of previously
observed domains. Hence our method can jointly learn both semantically
discriminative and domain-invariant features with labeled source domain and
unlabeled target domains. The experiments on Digits, DomainNet and
Office-Caltech benchmarks demonstrate the strong performance of our approach
when compared to the state-of-the-art.
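The two gradient constraints described above resemble GEM-style gradient projection: if a proposed update would increase a reference loss (to first order, when its inner product with that loss's descent direction is negative), project it back onto the constraint boundary. The sketch below is a minimal single-constraint illustration of that idea, not the paper's actual implementation; the function name `project_gradient` and the first-order approximation are assumptions.

```python
import numpy as np

def project_gradient(g, g_ref, eps=1e-12):
    # g: proposed descent update for the new objective (e.g. contrastive
    #    loss on the incoming domain), flattened into a vector.
    # g_ref: descent direction (negative gradient) of a reference loss
    #    that must not increase (e.g. supervised source loss).
    # First-order constraint: dot(g, g_ref) >= 0 means following g does
    # not increase the reference loss; if violated, project g onto the
    # boundary of the feasible half-space.
    inner = np.dot(g, g_ref)
    if inner >= 0:
        return g
    return g - (inner / (np.dot(g_ref, g_ref) + eps)) * g_ref
```

With multiple previously observed domains, one such constraint would be maintained per domain; solving them jointly becomes a small quadratic program, as in GEM.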
Related papers
- Adversarial Bi-Regressor Network for Domain Adaptive Regression [52.5168835502987]
It is essential to learn a cross-domain regressor to mitigate the domain shift.
This paper proposes a novel method Adversarial Bi-Regressor Network (ABRNet) to seek more effective cross-domain regression model.
arXiv Detail & Related papers (2022-09-20T18:38:28Z)
- Joint Attention-Driven Domain Fusion and Noise-Tolerant Learning for
Multi-Source Domain Adaptation [2.734665397040629]
Multi-source Unsupervised Domain Adaptation transfers knowledge from multiple source domains with labeled data to an unlabeled target domain.
The distribution discrepancy between different domains and the noisy pseudo-labels in the target domain both lead to performance bottlenecks.
We propose an approach that integrates Attention-driven Domain fusion and Noise-Tolerant learning (ADNT) to address the two issues mentioned above.
arXiv Detail & Related papers (2022-08-05T01:08:41Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
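Aligning category-wise centroids across domains can be sketched as follows: compute a per-class mean feature in each domain (using ground-truth labels in the source and pseudo-labels in the target) and penalize the distance between matching centroids. This is a minimal illustration under stated assumptions, not the cited paper's implementation; the function name and the squared-distance loss form are hypothetical.

```python
import numpy as np

def centroid_alignment_loss(feat_s, lab_s, feat_t, lab_t, num_classes):
    # feat_s, feat_t: (N, D) feature matrices for source/target samples.
    # lab_s: ground-truth source labels; lab_t: target pseudo-labels.
    # For each class present in both domains, penalize the squared
    # distance between the source and target class centroids.
    loss, count = 0.0, 0
    for c in range(num_classes):
        s = feat_s[lab_s == c]
        t = feat_t[lab_t == c]
        if len(s) and len(t):
            loss += np.sum((s.mean(axis=0) - t.mean(axis=0)) ** 2)
            count += 1
    return loss / max(count, 1)
```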
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive
Learning [62.7588467386166]
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
arXiv Detail & Related papers (2021-04-22T13:39:12Z)
- Gradient Regularized Contrastive Learning for Continual Domain
Adaptation [86.02012896014095]
We study the problem of continual domain adaptation, where the model is presented with a labeled source domain and a sequence of unlabeled target domains.
We propose Gradient Regularized Contrastive Learning (GRCL) to solve the obstacles.
Experiments on Digits, DomainNet and Office-Caltech benchmarks demonstrate the strong performance of our approach.
arXiv Detail & Related papers (2021-03-23T04:10:42Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain than in the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework, and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain
Adaptation [7.538482310185133]
We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly learn to contradistinguish the unlabeled target domain in an unsupervised way.
We achieve the state-of-the-art on Office-31 and VisDA-2017 datasets in both single-source and multi-source settings.
arXiv Detail & Related papers (2020-05-25T19:54:38Z)
- Towards Stable and Comprehensive Domain Alignment: Max-Margin
Domain-Adversarial Training [38.12978698952838]
We propose a novel Max-margin Domain-Adversarial Training (MDAT) by designing an Adversarial Reconstruction Network (ARN)
ARN conducts both feature-level and pixel-level domain alignment without involving extra network structures.
Our approach outperforms other state-of-the-art domain alignment methods.
arXiv Detail & Related papers (2020-03-30T07:48:52Z)
- Unsupervised Domain Adaptive Object Detection using Forward-Backward
Cyclic Adaptation [13.163271874039191]
We present a novel approach to perform the unsupervised domain adaptation for object detection through forward-backward cyclic (FBC) training.
Recent adversarial training based domain adaptation methods have shown their effectiveness on minimizing domain discrepancy via marginal feature distributions alignment.
We propose Forward-Backward Cyclic Adaptation, which iteratively computes adaptation from source to target via backward hopping and from target to source via forward passing.
arXiv Detail & Related papers (2020-02-03T06:24:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.