Order-preserving Consistency Regularization for Domain Adaptation and
Generalization
- URL: http://arxiv.org/abs/2309.13258v1
- Date: Sat, 23 Sep 2023 04:45:42 GMT
- Title: Order-preserving Consistency Regularization for Domain Adaptation and
Generalization
- Authors: Mengmeng Jing, Xiantong Zhen, Jingjing Li, Cees Snoek
- Abstract summary: Deep learning models fail on cross-domain challenges if the model is oversensitive to domain-specific attributes.
We propose the Order-preserving Consistency Regularization (OCR) for cross-domain tasks.
- Score: 45.64969000499267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models fail on cross-domain challenges if the model is
oversensitive to domain-specific attributes, e.g., lighting, background,
camera angle, etc. To alleviate this problem, data augmentation coupled with
consistency regularization is commonly adopted to make the model less
sensitive to domain-specific attributes. Consistency regularization forces
the model to output the same representation or prediction for two views of one
image. These constraints, however, are either too strict or not
order-preserving for the classification probabilities. In this work, we propose
the Order-preserving Consistency Regularization (OCR) for cross-domain tasks.
The order-preserving property of the prediction makes the model robust to
task-irrelevant transformations. As a result, the model becomes less sensitive
to domain-specific attributes. Comprehensive experiments show that our
method achieves clear advantages on five different cross-domain tasks.
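As a rough illustration of the idea (not the paper's exact formulation), an order-preserving consistency term can penalize only pairwise class-order violations between two views instead of forcing identical probabilities. The PyTorch sketch below is a minimal, hypothetical implementation; the function name, the hinge form, and the one-sided detach are all assumptions:

```python
import torch
import torch.nn.functional as F

def order_preserving_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Hinge on pairwise class-order violations between two views.

    For every class pair (i, j): if view A ranks class i above class j,
    view B is only penalized when it flips that ordering. This is weaker
    than forcing identical probabilities but preserves the prediction order.
    """
    # diff[b, i, j] = logit_i - logit_j, shape (B, C, C)
    diff_a = logits_a.unsqueeze(2) - logits_a.unsqueeze(1)
    diff_b = logits_b.unsqueeze(2) - logits_b.unsqueeze(1)
    # Use view A's ordering as the (detached) reference.
    sign_a = torch.sign(diff_a).detach()
    # Positive only where view B contradicts view A's pairwise order.
    return F.relu(-sign_a * diff_b).mean()

# Illustrative usage with a classifier `model` and two augmented views x1, x2:
#   loss = F.cross_entropy(model(x1), y) + lam * order_preserving_loss(model(x1), model(x2))
```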
Related papers
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves better performance than the current state of the art, in both real-to-real and synthetic-to-real scenarios.
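A minimal sketch of the multi-task setup described above, assuming PyTorch: a shared point encoder feeds a segmentation head supervised on source data and a signed-distance head trained on both domains. All module names, dimensions, and the L1 surface loss are illustrative assumptions, not SALUDA's actual architecture; in the paper the surface targets come self-supervised from the raw scans, while here they are passed in as precomputed tensors:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurfaceAuxNet(nn.Module):
    """Shared point encoder with a segmentation head (source-supervised)
    and a signed-distance head trained on source and target points."""
    def __init__(self, in_dim=3, feat_dim=64, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.seg_head = nn.Linear(feat_dim, num_classes)
        self.sdf_head = nn.Linear(feat_dim, 1)   # implicit surface value

    def forward(self, pts):                      # pts: (B, N, in_dim)
        feats = self.encoder(pts)
        return self.seg_head(feats), self.sdf_head(feats).squeeze(-1)

def training_step(net, src_pts, src_labels, tgt_pts, src_sdf, tgt_sdf, lam=0.5):
    seg_src, sdf_src = net(src_pts)
    _, sdf_tgt = net(tgt_pts)
    # Supervised segmentation on source only.
    loss_seg = F.cross_entropy(seg_src.flatten(0, 1), src_labels.flatten())
    # Surface regression on both domains forces the shared encoder to
    # model geometry common to source and target.
    loss_sdf = F.l1_loss(sdf_src, src_sdf) + F.l1_loss(sdf_tgt, tgt_sdf)
    return loss_seg + lam * loss_sdf
```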
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Cross-Domain Ensemble Distillation for Domain Generalization [17.575016642108253]
We propose a simple yet effective method for domain generalization, named cross-domain ensemble distillation (XDED).
Our method generates an ensemble of the output logits from training data with the same label but from different domains and then penalizes each output for the mismatch with the ensemble.
We show that models learned by our method are robust against adversarial attacks and image corruptions.
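A hypothetical sketch of that penalty, assuming PyTorch: logits of same-label samples (drawn from different domains within a batch) are averaged into an ensemble, and each sample is distilled toward it with a temperature-scaled KL term. The temperature value and the detached ensemble target are assumptions, not XDED's published choices:

```python
import torch
import torch.nn.functional as F

def xded_style_loss(logits: torch.Tensor, labels: torch.Tensor, temp: float = 4.0) -> torch.Tensor:
    """Distill each sample toward the mean logits ("ensemble") of all
    batch samples sharing its label, regardless of their source domain."""
    loss = torch.zeros((), device=logits.device)
    groups = 0
    for c in labels.unique():
        group = logits[labels == c]               # same label, mixed domains
        if group.size(0) < 2:
            continue                              # no ensemble to distill from
        ensemble = group.mean(dim=0, keepdim=True).detach()
        log_p = F.log_softmax(group / temp, dim=1)
        q = F.softmax(ensemble / temp, dim=1).expand_as(log_p)
        loss = loss + F.kl_div(log_p, q, reduction="batchmean")
        groups += 1
    return loss / max(groups, 1)
```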
arXiv Detail & Related papers (2022-11-25T12:32:36Z)
- A Domain Gap Aware Generative Adversarial Network for Multi-domain Image Translation [22.47113158859034]
The paper proposes a unified model to translate images across multiple domains with significant domain gaps.
With a single unified generator, the model maintains consistency over global shapes as well as local texture information across domains.
arXiv Detail & Related papers (2021-10-21T00:33:06Z)
- Heuristic Domain Adaptation [105.59792285047536]
The Heuristic Domain Adaptation Network (HDAN) explicitly learns domain-invariant and domain-specific representations.
HDAN exceeds the state of the art on unsupervised DA, multi-source DA, and semi-supervised DA.
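One toy way to make such a decomposition explicit, assuming PyTorch (an illustrative reading of the summary, not HDAN's actual construction): represent the feature as an invariant part plus a domain-specific residual, and keep the residual small so the invariant part carries the label information:

```python
import torch
import torch.nn as nn

class DecomposedFeature(nn.Module):
    """Feature = domain-invariant part + explicit domain-specific residual."""
    def __init__(self, in_dim=128, feat_dim=64):
        super().__init__()
        self.invariant = nn.Linear(in_dim, feat_dim)
        self.specific = nn.Linear(in_dim, feat_dim)

    def forward(self, x):
        g, h = self.invariant(x), self.specific(x)
        return g + h, h

# Illustrative training objective: classify on the full feature while
# shrinking the domain-specific residual, so the invariant part must
# carry the label information:
#   feat, residual = decomposer(x)
#   loss = F.cross_entropy(classifier(feat), y) + beta * residual.norm(dim=1).mean()
```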
arXiv Detail & Related papers (2020-11-30T04:21:35Z)
- Domain Adversarial Fine-Tuning as an Effective Regularizer [80.14528207465412]
In Natural Language Processing (NLP), pretrained language models (LMs) that are transferred to downstream tasks have been recently shown to achieve state-of-the-art results.
Standard fine-tuning can degrade the general-domain representations captured during pretraining.
We introduce a new regularization technique, AFTER: domain Adversarial Fine-Tuning as an Effective Regularizer.
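A minimal sketch of the general mechanism, assuming PyTorch: a domain discriminator tries to separate fine-tuning-domain text from general (pretraining-domain) text, and a gradient-reversal layer turns that signal into a regularizer on the encoder. The helper names and the loss weighting are illustrative, not AFTER's exact setup:

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

def adversarial_regularizer(encoder, discriminator, task_batch, general_batch, lam=0.1):
    # The discriminator tries to tell fine-tuning-domain inputs (label 0)
    # from general-domain inputs (label 1); the reversed gradient pushes
    # the encoder toward representations that keep the two domains mixed.
    feats = torch.cat([encoder(task_batch), encoder(general_batch)])
    domains = torch.cat([
        torch.zeros(len(task_batch), dtype=torch.long, device=feats.device),
        torch.ones(len(general_batch), dtype=torch.long, device=feats.device)])
    logits = discriminator(GradReverse.apply(feats, lam))
    return F.cross_entropy(logits, domains)
```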
arXiv Detail & Related papers (2020-09-28T14:35:06Z)
- Learning from Scale-Invariant Examples for Domain Adaptation in Semantic Segmentation [6.320141734801679]
We propose a novel approach that exploits the scale-invariance property of semantic segmentation models for self-supervised domain adaptation.
Our algorithm is based on the reasonable assumption that, in general, the semantic labeling of an object or stuff region should be unchanged regardless of its scale in the given context.
We show that this constraint is violated on images of the target domain, and hence can be used to transfer labels between differently scaled patches.
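A hypothetical sketch of the scale-invariance signal, assuming PyTorch and a fully convolutional segmentation model that returns per-pixel logits: the full-resolution prediction serves as a detached pseudo-label for a rescaled view of the same unlabeled target image. The patch-based label transfer in the paper is more involved; this shows only the core constraint:

```python
import torch
import torch.nn.functional as F

def scale_consistency_loss(model, images, scale=0.5):
    """The semantic labeling should survive rescaling: use the full-resolution
    prediction as a detached pseudo-label for a downscaled view."""
    with torch.no_grad():
        full = model(images)                     # (B, C, H, W) logits
        pseudo = full.argmax(dim=1)              # (B, H, W) hard labels
    small = F.interpolate(images, scale_factor=scale,
                          mode="bilinear", align_corners=False)
    pred_small = model(small)
    # Upsample the small-scale prediction back to full resolution
    # before comparing against the pseudo-labels.
    pred_up = F.interpolate(pred_small, size=full.shape[-2:],
                            mode="bilinear", align_corners=False)
    return F.cross_entropy(pred_up, pseudo)
```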
arXiv Detail & Related papers (2020-07-28T19:40:45Z)
- Self-Guided Adaptation: Progressive Representation Alignment for Domain Adaptive Object Detection [86.69077525494106]
Unsupervised domain adaptation (UDA) has achieved unprecedented success in improving the cross-domain robustness of object detection models.
Existing UDA methods largely ignore the instantaneous data distribution during model learning, which can degrade the feature representation under large domain shift.
We propose a Self-Guided Adaptation (SGA) model that aims to align feature representations and transfer object detection models across domains.
arXiv Detail & Related papers (2020-03-19T13:30:45Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation facilitates learning on the unlabeled target domain by relying on well-established source-domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space destroy the intrinsic structure of the data.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers that interpolates two intermediate domains to bridge the source and target domains.
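A speculative sketch of the consistent-classifier idea, assuming PyTorch and pretrained generators in both directions (all names here are illustrative, not the paper's): translated images act as intermediate domains, with a supervised loss on source-content images and a KL consistency term tying the two classifiers together on target-content images:

```python
import torch
import torch.nn.functional as F

def bidirectional_consistency(cls_a, cls_b, gen_s2t, gen_t2s, x_src, y_src, x_tgt):
    """Translated images act as intermediate domains bridging source and target;
    two classifiers are trained to agree across them."""
    x_s2t = gen_s2t(x_src)     # source content rendered in target style
    x_t2s = gen_t2s(x_tgt)     # target content rendered in source style
    # Supervised loss: source labels hold for the source image and its translation.
    sup = F.cross_entropy(cls_a(x_src), y_src) + F.cross_entropy(cls_b(x_s2t), y_src)
    # Consistency: the two classifiers should agree on target-content images.
    log_p = F.log_softmax(cls_a(x_t2s), dim=1)
    q = F.softmax(cls_b(x_tgt), dim=1)
    return sup + F.kl_div(log_p, q, reduction="batchmean")
```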
arXiv Detail & Related papers (2020-02-12T09:45:39Z)