SelfReg: Self-supervised Contrastive Regularization for Domain
Generalization
- URL: http://arxiv.org/abs/2104.09841v1
- Date: Tue, 20 Apr 2021 09:08:29 GMT
- Title: SelfReg: Self-supervised Contrastive Regularization for Domain
Generalization
- Authors: Daehee Kim, Seunghyun Park, Jinkyu Kim, and Jaekoo Lee
- Abstract summary: We propose a new regularization method for domain generalization based on contrastive learning, self-supervised contrastive regularization (SelfReg)
The proposed approach uses only positive data pairs, thereby resolving the various problems caused by negative pair sampling.
On the recent DomainBed benchmark, the proposed method performs comparably to conventional state-of-the-art alternatives.
- Score: 7.512471799525974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In general, an experimental environment for deep learning assumes that the
training and the test dataset are sampled from the same distribution. However,
in real-world situations, a difference in distribution between the two
datasets, known as domain shift, may occur, which becomes a major factor impeding the
generalization performance of the model. The research field to solve this
problem is called domain generalization, and it alleviates the domain shift
problem by extracting domain-invariant features explicitly or implicitly. In
recent studies, contrastive learning-based domain generalization approaches
have been proposed and achieved high performance. These approaches require
sampling of negative data pairs. However, the performance of contrastive
learning fundamentally depends on the quality and quantity of the negative data pairs.
To address this issue, we propose a new regularization method for domain
generalization based on contrastive learning, self-supervised contrastive
regularization (SelfReg). The proposed approach uses only positive data pairs,
thereby resolving the various problems caused by negative pair sampling. Moreover,
we propose a class-specific domain perturbation layer (CDPL), which makes it
possible to effectively apply mixup augmentation even when only positive data
pairs are used. The experimental results show that the techniques incorporated
in SelfReg contribute to the performance in a complementary manner. On the recent
DomainBed benchmark, the proposed method performs comparably to conventional
state-of-the-art alternatives. Code is available at
https://github.com/dnap512/SelfReg.
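To make the core idea concrete, below is a minimal PyTorch sketch of a positive-pair-only contrastive (alignment) regularizer of the kind the abstract describes: features of samples sharing a class label, possibly drawn from different training domains, are pulled together, and no negative pairs are sampled. The in-class shuffling scheme and the names `selfreg_style_loss` and `lambda_reg` are illustrative assumptions rather than the authors' implementation, and the CDPL and mixup components are omitted; see the official repository above for the actual code.

```python
import torch
import torch.nn.functional as F


def selfreg_style_loss(features, labels, lambda_reg=1.0):
    """Positive-pair-only contrastive regularization (illustrative sketch).

    For each class present in the mini-batch, a positive partner is drawn
    for every sample by shuffling indices within that class, and the paired
    feature vectors are pulled together with an L2 (alignment) penalty.
    No negative pairs are sampled, which is the property the abstract
    highlights.
    """
    loss = features.new_zeros(())
    n_classes_used = 0
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:
            continue  # need at least two same-class samples to form a pair
        perm = idx[torch.randperm(idx.numel(), device=idx.device)]
        loss = loss + F.mse_loss(features[idx], features[perm], reduction="mean")
        n_classes_used += 1
    return lambda_reg * loss / max(n_classes_used, 1)


# Hypothetical usage with any feature extractor and classifier head:
# feats = encoder(x)                               # (batch, feature_dim)
# total = F.cross_entropy(head(feats), y) + selfreg_style_loss(feats, y)
```

Because only same-class pairs are aligned, such a regularizer does not depend on the careful negative-pair mining that conventional contrastive losses require, which is exactly the failure mode the paper aims to avoid.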
Related papers
- NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Domain-aware Triplet loss in Domain Generalization [0.0]
Domain shift is caused by discrepancies in the distributions of the testing and training data.
We design a domain-aware triplet loss for domain generalization to help the model cluster similar semantic features.
Our algorithm is designed to disperse domain information in the embedding space.
arXiv Detail & Related papers (2023-03-01T14:02:01Z)
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$^3$G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- Improving Out-of-Distribution Robustness via Selective Augmentation [61.147630193060856]
Machine learning algorithms assume that training and test examples are drawn from the same distribution.
Distribution shift is a common problem in real-world applications and can cause models to perform dramatically worse at test time.
We propose a mixup-based technique which learns invariant functions via selective augmentation called LISA.
arXiv Detail & Related papers (2022-01-02T05:58:33Z)
- Domain Generalization via Domain-based Covariance Minimization [4.414778226415752]
We propose a novel variance measurement for multiple domains so as to minimize the difference between conditional distributions across domains.
We show that for small-scale datasets, we are able to achieve better quantitative results indicating better generalization performance over unseen test datasets.
arXiv Detail & Related papers (2021-10-12T19:30:15Z)
- Mixup Regularized Adversarial Networks for Multi-Domain Text Classification [16.229317527580072]
Using the shared-private paradigm and adversarial training has significantly improved the performances of multi-domain text classification (MDTC) models.
However, there are two issues for the existing methods.
We propose a mixup regularized adversarial network (MRAN) to address these two issues.
arXiv Detail & Related papers (2021-01-31T15:24:05Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.