Adversarially Adaptive Normalization for Single Domain Generalization
- URL: http://arxiv.org/abs/2106.01899v1
- Date: Tue, 1 Jun 2021 23:58:23 GMT
- Title: Adversarially Adaptive Normalization for Single Domain Generalization
- Authors: Xinjie Fan, Qifei Wang, Junjie Ke, Feng Yang, Boqing Gong, Mingyuan Zhou
- Abstract summary: We propose a generic normalization approach, adaptive standardization and rescaling normalization (ASR-Norm).
ASR-Norm learns both the standardization and rescaling statistics via neural networks.
We show that ASR-Norm can bring consistent improvement to the state-of-the-art ADA approaches.
- Score: 71.80587939738672
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single domain generalization aims to learn a model that performs well on many unseen domains when only a single domain's data is available for training. Existing works focus on adversarial domain augmentation (ADA) to improve the model's generalization capability, while the impact of the statistics of normalization layers on domain generalization remains under-investigated. In this paper, we propose a generic normalization approach, adaptive standardization and rescaling normalization (ASR-Norm), to fill this gap. ASR-Norm learns both the standardization and rescaling statistics via neural networks, and can be viewed as a generic form of the traditional normalizations. When trained with ADA, the statistics in ASR-Norm are learned to be adaptive to data coming from different domains, which improves the model's generalization performance across domains, especially on target domains with a large discrepancy from the source domain. Experimental results show that ASR-Norm brings consistent improvements to state-of-the-art ADA approaches: 1.6%, 2.7%, and 6.3% on average on the Digits, CIFAR-10-C, and PACS benchmarks, respectively. As a generic tool, the improvement introduced by ASR-Norm is agnostic to the choice of ADA method.
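To make the learned-statistics idea concrete, here is a minimal PyTorch sketch of adaptive standardization and rescaling: small nets predict standardization (mu, sigma) and rescaling (gamma, beta) statistics from per-instance channel statistics, and a learned gate blends predicted and observed values. The net sizes, gating scheme, and module layout are illustrative assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class ASRNormSketch(nn.Module):
        """Hedged sketch of adaptive standardization and rescaling (ASR-Norm-style)."""

        def __init__(self, channels: int, hidden: int = 16):
            super().__init__()
            # Nets that predict standardization and rescaling statistics
            # from observed per-instance channel statistics (assumed design).
            self.std_net = nn.Sequential(
                nn.Linear(2 * channels, hidden), nn.ReLU(),
                nn.Linear(hidden, 2 * channels))
            self.rescale_net = nn.Sequential(
                nn.Linear(2 * channels, hidden), nn.ReLU(),
                nn.Linear(hidden, 2 * channels))
            self.gate = nn.Parameter(torch.zeros(2 * channels))  # learned blend factor

        def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, C, H, W)
            n, c = x.shape[:2]
            mu = x.mean(dim=(2, 3))                    # observed per-instance mean
            sigma = x.std(dim=(2, 3)) + 1e-5           # observed per-instance std
            stats = torch.cat([mu, sigma], dim=1)      # (N, 2C)
            lam = torch.sigmoid(self.gate)             # gates in (0, 1)
            blended = lam * self.std_net(stats) + (1 - lam) * stats
            mu_a, sigma_a = blended[:, :c], blended[:, c:].abs() + 1e-5
            x_hat = (x - mu_a.view(n, c, 1, 1)) / sigma_a.view(n, c, 1, 1)
            gb = self.rescale_net(stats)               # adaptive rescaling stats
            gamma, beta = gb[:, :c], gb[:, c:]
            return x_hat * (1 + gamma.view(n, c, 1, 1)) + beta.view(n, c, 1, 1)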
Related papers
- Domain Generalization via Nuclear Norm Regularization [38.18747924656019]
We propose a simple and effective regularization method based on the nuclear norm of the learned features for domain generalization.
We show nuclear norm regularization achieves strong performance compared to baselines in a wide range of domain generalization tasks.
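As a rough sketch of how such a penalty can be wired into training (the weighting and the choice of the mini-batch feature matrix are assumptions on my part, not the paper's exact formulation):

    import torch

    def nuclear_norm_penalty(feats: torch.Tensor) -> torch.Tensor:
        # Nuclear norm = sum of singular values of the (batch, dim) feature matrix.
        return torch.linalg.svdvals(feats).sum()

    # Hypothetical use inside a training step:
    #   loss = task_loss + reg_weight * nuclear_norm_penalty(features)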
arXiv Detail & Related papers (2023-03-13T23:30:48Z)
- Zero-Shot Anomaly Detection via Batch Normalization [58.291409630995744]
Anomaly detection plays a crucial role in many safety-critical application domains.
The challenge of adapting an anomaly detector to drift in the normal data distribution has led to the development of zero-shot AD techniques.
We propose a simple yet effective method called Adaptive Centered Representations (ACR) for zero-shot batch-level AD.
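One plausible sketch of batch-level zero-shot scoring, assuming the key effect of batch normalization here is to re-center the dominant (normal) mode of a test batch so that anomalies stand out; ACR's actual meta-training across distributions is not shown:

    import torch

    def batch_centered_scores(feats: torch.Tensor) -> torch.Tensor:
        # Normalize with the test batch's own statistics: the majority
        # (normal) samples land near the origin, outliers do not.
        mu = feats.mean(dim=0, keepdim=True)
        sigma = feats.std(dim=0, keepdim=True) + 1e-5
        z = (feats - mu) / sigma
        return z.norm(dim=1)  # larger score = more anomalous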
arXiv Detail & Related papers (2023-02-15T18:34:15Z)
- When Neural Networks Fail to Generalize? A Model Sensitivity Perspective [82.36758565781153]
Domain generalization (DG) aims to train a model to perform well in unseen domains under different distributions.
This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG).
We empirically ascertain a property of a model that correlates strongly with its generalization, which we coin "model sensitivity".
We propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies.
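A minimal sketch of the spectral-augmentation idea: perturb only the frequency bands where the model is most sensitive. The sensitivity mask is a hypothetical input, and random noise stands in for the paper's adversarial optimization:

    import torch

    def spectral_augment(images: torch.Tensor, sensitivity_mask: torch.Tensor,
                         eps: float = 0.1) -> torch.Tensor:
        # images: (N, C, H, W); sensitivity_mask: (H, W), 1s on sensitive bands.
        spec = torch.fft.fft2(images)             # per-channel 2-D spectrum
        noise = eps * torch.randn_like(images)    # stand-in for an adversarial step
        spec = spec + sensitivity_mask * noise    # perturb sensitive frequencies only
        return torch.fft.ifft2(spec).real         # back to pixel space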
arXiv Detail & Related papers (2022-12-01T20:15:15Z)
- Reciprocal Normalization for Domain Adaptation [31.293016830229313]
Batch normalization (BN) is widely used in modern deep neural networks.
We propose a novel normalization method, Reciprocal Normalization (RN).
RN is more suitable for UDA problems and can be easily integrated into popular domain adaptation methods.
arXiv Detail & Related papers (2021-12-20T12:17:22Z)
- Improving Multi-Domain Generalization through Domain Re-labeling [31.636953426159224]
We study the important link between pre-specified domain labels and the generalization performance.
We introduce a general approach for multi-domain generalization, MulDEns, that uses an ERM-based deep ensembling backbone.
We show that MulDEns does not require tailoring the augmentation strategy or the training process specific to a dataset.
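MulDEns' domain re-labeling is specific to the paper, but the ERM deep-ensemble backbone it builds on can be sketched generically (how members are constructed here is an assumption):

    import torch

    def ensemble_predict(models, x: torch.Tensor) -> torch.Tensor:
        # Average the softmax outputs of independently ERM-trained members.
        with torch.no_grad():
            probs = torch.stack([m(x).softmax(dim=-1) for m in models])
        return probs.mean(dim=0)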
arXiv Detail & Related papers (2021-12-17T23:21:50Z)
- Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).
Based on this formulation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
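DDG's specific disentanglement constraints are not spelled out in this summary; as a generic illustration, a primal-dual (Lagrangian) step for min L(theta) s.t. C(theta) <= eps looks like this, with all names hypothetical:

    import torch

    def primal_dual_step(optimizer, lam, task_loss, constraint,
                         eps: float = 0.0, dual_lr: float = 0.01):
        # lam: non-negative dual variable, e.g. torch.zeros(1).
        # Primal descent on the Lagrangian L + lam * (C - eps) ...
        lagrangian = task_loss + lam.detach() * (constraint - eps)
        optimizer.zero_grad()
        lagrangian.backward()
        optimizer.step()
        # ... then projected dual ascent keeping lam >= 0.
        with torch.no_grad():
            lam.add_(dual_lr * (constraint.detach() - eps)).clamp_(min=0.0)
        return lam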
arXiv Detail & Related papers (2021-11-27T07:36:32Z)
- Adaptive Domain-Specific Normalization for Generalizable Person Re-Identification [81.30327016286009]
We propose a novel adaptive domain-specific normalization approach (AdsNorm) for generalizable person Re-ID.
arXiv Detail & Related papers (2021-05-07T02:54:55Z)
- SelfReg: Self-supervised Contrastive Regularization for Domain Generalization [7.512471799525974]
We propose a new regularization method for domain generalization based on contrastive learning, self-supervised contrastive regularization (SelfReg).
The proposed approach uses only positive data pairs, thus resolving various problems caused by negative pair sampling.
On the recent benchmark DomainBed, the proposed method shows performance comparable to conventional state-of-the-art alternatives.
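A minimal sketch of a positive-pair-only regularizer in the spirit of SelfReg, which pulls together representations of positive pairs without any negatives; SelfReg itself also regularizes logits and uses in-batch same-class pairing, which is omitted here:

    import torch

    def positive_pair_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
        # z1, z2: (batch, dim) features of paired positive samples.
        return (z1 - z2).pow(2).sum(dim=1).mean()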
arXiv Detail & Related papers (2021-04-20T09:08:29Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test-time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
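A sketch of the learning-to-adapt idea in ARM's contextual flavor: a context network summarizes the (single-domain) batch, and its output conditions the classifier, with everything trained end to end on the training domains. Layer shapes and the plain linear context net are illustrative assumptions:

    import torch
    import torch.nn as nn

    class ARMStyleClassifier(nn.Module):
        def __init__(self, dim: int, ctx_dim: int, num_classes: int):
            super().__init__()
            self.context_net = nn.Linear(dim, ctx_dim)
            self.classifier = nn.Linear(dim + ctx_dim, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, dim), with one domain per batch.
            ctx = self.context_net(x).mean(dim=0)  # batch-level adaptation signal
            ctx = ctx.expand(x.size(0), -1)        # share context across the batch
            return self.classifier(torch.cat([x, ctx], dim=1))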
arXiv Detail & Related papers (2020-07-06T17:59:30Z)