Gradient Regularized Contrastive Learning for Continual Domain
Adaptation
- URL: http://arxiv.org/abs/2103.12294v1
- Date: Tue, 23 Mar 2021 04:10:42 GMT
- Title: Gradient Regularized Contrastive Learning for Continual Domain
Adaptation
- Authors: Shixiang Tang, Peng Su, Dapeng Chen and Wanli Ouyang
- Abstract summary: We study the problem of continual domain adaptation, where the model is presented with a labelled source domain and a sequence of unlabelled target domains.
We propose Gradient Regularized Contrastive Learning (GRCL) to address these obstacles.
Experiments on Digits, DomainNet and Office-Caltech benchmarks demonstrate the strong performance of our approach.
- Score: 86.02012896014095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human beings can quickly adapt to environmental changes by leveraging
learning experience. However, adapting deep neural networks to dynamic
environments by machine learning algorithms remains a challenge. To better
understand this issue, we study the problem of continual domain adaptation,
where the model is presented with a labelled source domain and a sequence of
unlabelled target domains. The obstacles in this problem are both domain shift
and catastrophic forgetting. We propose Gradient Regularized Contrastive
Learning (GRCL) to overcome both obstacles. At the core of our method, gradient
regularization plays two key roles: (1) enforcing the gradient not to harm the
discriminative ability of source features which can, in turn, benefit the
adaptation ability of the model to target domains; (2) constraining the
gradient not to increase the classification loss on old target domains, which
enables the model to preserve the performance on old target domains when
adapting to an incoming target domain. Experiments on Digits, DomainNet and
Office-Caltech benchmarks demonstrate the strong performance of our approach
when compared to the other state-of-the-art methods.
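Both gradient constraints in the abstract amount to keeping the update direction from conflicting with a reference gradient (computed on the source domain or an old target domain). As a rough illustration only (a simplified sketch with illustrative names, not the paper's actual algorithm), a single such constraint can be enforced by projecting out the conflicting component:

```python
import numpy as np

def project_gradient(g, g_ref):
    """Project gradient g so it does not conflict with reference gradient g_ref.

    If the dot product is negative (i.e. stepping along -g would increase the
    reference loss to first order), remove the conflicting component of g;
    otherwise return g unchanged.
    """
    dot = float(np.dot(g, g_ref))
    if dot >= 0.0:
        return g
    return g - (dot / float(np.dot(g_ref, g_ref))) * g_ref

# A new-domain gradient that conflicts with the source-domain gradient:
g_new = np.array([1.0, -1.0])
g_src = np.array([0.0, 1.0])
g_proj = project_gradient(g_new, g_src)
# After projection the update no longer increases the source loss: [1.0, 0.0]
```

In the multi-domain setting described above there would be one such constraint per preserved domain, which in general requires solving a small quadratic program rather than a single projection.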
Related papers
- PiPa++: Towards Unification of Domain Adaptive Semantic Segmentation via Self-supervised Learning [34.786268652516355]
Unsupervised domain adaptive segmentation aims to improve the segmentation accuracy of models on target domains without relying on labeled data from those domains.
It seeks to align the feature representations of the source domain (where labeled data is available) and the target domain (where only unlabeled data is present).
arXiv Detail & Related papers (2024-07-24T08:53:29Z) - Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
Restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda).
arXiv Detail & Related papers (2024-02-07T04:11:25Z) - CDA: Contrastive-adversarial Domain Adaptation [11.354043674822451]
We propose a two-stage model for domain adaptation called Contrastive-adversarial Domain Adaptation (CDA).
While the adversarial component facilitates domain-level alignment, two-stage contrastive learning exploits class information to achieve higher intra-class compactness across domains.
arXiv Detail & Related papers (2023-01-10T07:43:21Z) - Adversarial Bi-Regressor Network for Domain Adaptive Regression [52.5168835502987]
It is essential to learn a cross-domain regressor to mitigate the domain shift.
This paper proposes a novel method, the Adversarial Bi-Regressor Network (ABRNet), to seek a more effective cross-domain regression model.
arXiv Detail & Related papers (2022-09-20T18:38:28Z) - Domain Adaptation for Object Detection using SE Adaptors and Center Loss [0.0]
We introduce an unsupervised domain adaptation method built on the foundation of Faster R-CNN to prevent drops in performance due to domain shift.
We also introduce a family of adaptation layers that leverage the squeeze-and-excitation mechanism, called SE Adaptors, to improve domain attention.
Finally, we incorporate a center loss in the instance- and image-level representations to reduce the intra-class variance.
arXiv Detail & Related papers (2022-05-25T17:18:31Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive
Learning [62.7588467386166]
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
arXiv Detail & Related papers (2021-04-22T13:39:12Z) - Cross-Domain Grouping and Alignment for Domain Adaptive Semantic
Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not account for inter-class variation within the target domain itself or within each estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z) - Gradient Regularized Contrastive Learning for Continual Domain
Adaptation [26.21464286134764]
We study the problem of continual domain adaptation, where the model is presented with a labeled source domain and a sequence of unlabeled target domains.
In this work, we propose Gradient Regularized Contrastive Learning to address the above obstacles.
Our method can jointly learn both semantically discriminative and domain-invariant features with labeled source domain and unlabeled target domains.
arXiv Detail & Related papers (2020-07-25T14:30:03Z) - Unsupervised Domain Adaptive Object Detection using Forward-Backward
Cyclic Adaptation [13.163271874039191]
We present a novel approach to perform the unsupervised domain adaptation for object detection through forward-backward cyclic (FBC) training.
Recent adversarial training based domain adaptation methods have shown their effectiveness on minimizing domain discrepancy via marginal feature distributions alignment.
We propose Forward-Backward Cyclic Adaptation, which iteratively computes adaptation from source to target via backward hopping and from target to source via forward passing.
arXiv Detail & Related papers (2020-02-03T06:24:58Z)
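Several entries above (GRCL, CDA, patch-wise contrastive segmentation) build on a contrastive objective. As a generic illustration only (not code from any of these papers), an InfoNCE-style loss over L2-normalized features pulls an anchor toward its positive and away from negatives:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Minimal InfoNCE-style contrastive loss on L2-normalized feature vectors.

    anchor, positive: 1-D feature vectors; negatives: 2-D array, one row per
    negative. tau is the softmax temperature.
    """
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p, n = norm(anchor), norm(positive), norm(negatives)
    pos = np.exp(np.dot(a, p) / tau)          # similarity to the positive
    neg = np.exp(n @ a / tau).sum()           # similarities to all negatives
    return -np.log(pos / (pos + neg))

# Loss is near zero when the positive matches the anchor and negatives do not:
loss = info_nce(np.array([1.0, 0.0]),
                np.array([1.0, 0.0]),
                np.array([[0.0, 1.0], [-1.0, 0.0]]))
```

In the domain-adaptation setting, positives and negatives would typically be drawn across domains (e.g. same class, different domain) so that minimizing this loss encourages features that are both class-discriminative and domain-invariant.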
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.