Domain Generalization via Nuclear Norm Regularization
- URL: http://arxiv.org/abs/2303.07527v2
- Date: Mon, 4 Dec 2023 19:57:48 GMT
- Title: Domain Generalization via Nuclear Norm Regularization
- Authors: Zhenmei Shi, Yifei Ming, Ying Fan, Frederic Sala, Yingyu Liang
- Abstract summary: We propose a simple and effective regularization method based on the nuclear norm of the learned features for domain generalization.
We show nuclear norm regularization achieves strong performance compared to baselines in a wide range of domain generalization tasks.
- Score: 38.18747924656019
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to generalize to unseen domains is crucial for machine learning
systems deployed in the real world, especially when we only have data from
limited training domains. In this paper, we propose a simple and effective
regularization method based on the nuclear norm of the learned features for
domain generalization. Intuitively, the proposed regularizer mitigates the
impacts of environmental features and encourages learning domain-invariant
features. Theoretically, we provide insights into why nuclear norm
regularization is more effective compared to ERM and alternative regularization
methods. Empirically, we conduct extensive experiments on both synthetic and
real datasets. We show nuclear norm regularization achieves strong performance
compared to baselines in a wide range of domain generalization tasks. Moreover,
our regularizer is broadly applicable with various methods such as ERM and SWAD
with consistently improved performance, e.g., 1.7% and 0.9% test accuracy
improvements respectively on the DomainBed benchmark.
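The penalty the abstract describes can be sketched in a few lines of NumPy: the nuclear norm of a batch of learned features is the sum of singular values of the feature matrix, and it is added to the task loss with a weight. This is a minimal illustrative sketch, not the paper's implementation; the function names (`nuclear_norm`, `regularized_loss`) and the weight `lam` are assumptions, and the actual method applies the penalty to minibatch features of a deep network during training.

```python
import numpy as np

def nuclear_norm(features: np.ndarray) -> float:
    """Nuclear norm ||Z||_* = sum of singular values of the feature matrix Z."""
    return float(np.linalg.svd(features, compute_uv=False).sum())

def regularized_loss(task_loss: float, features: np.ndarray, lam: float = 0.01) -> float:
    """Task loss plus the nuclear-norm penalty on a batch of features."""
    return task_loss + lam * nuclear_norm(features)

# Example: a batch of 4 feature vectors of dimension 3 that spans only
# 2 directions, so only two singular values are nonzero.
Z = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])
penalty = nuclear_norm(Z)  # sqrt(2) + 2*sqrt(2) = 3*sqrt(2)
```

Intuitively, penalizing the nuclear norm biases the learner toward low-rank feature matrices, discouraging extra feature directions that encode environment-specific signal.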
Related papers
- FEED: Fairness-Enhanced Meta-Learning for Domain Generalization [13.757379847454372]
Generalizing to out-of-distribution data while remaining aware of model fairness is a significant and challenging problem in meta-learning.
This paper introduces an approach to fairness-aware meta-learning that significantly enhances domain generalization capabilities.
arXiv Detail & Related papers (2024-11-02T17:34:33Z)
- Efficiently Assemble Normalization Layers and Regularization for Federated Domain Generalization [1.1534313664323637]
Domain shift is a formidable issue in Machine Learning that causes a model to suffer from performance degradation when tested on unseen domains.
FedDG attempts to train a global model using collaborative clients in a privacy-preserving manner that can generalize well to unseen clients possibly with domain shift.
Here, we introduce a novel architectural method for FedDG, namely gPerXAN, which relies on a normalization scheme working with a guiding regularizer.
arXiv Detail & Related papers (2024-03-22T20:22:08Z)
- NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z)
- Normalization Perturbation: A Simple Domain Generalization Method for Real-World Domain Shifts [133.99270341855728]
Real-world domain styles can vary substantially due to environment changes and sensor noises.
Deep models only know the training domain style.
We propose Normalization Perturbation to overcome this domain style overfitting problem.
arXiv Detail & Related papers (2022-11-08T17:36:49Z)
- Improving Multi-Domain Generalization through Domain Re-labeling [31.636953426159224]
We study the important link between pre-specified domain labels and the generalization performance.
We introduce a general approach for multi-domain generalization, MulDEns, that uses an ERM-based deep ensembling backbone.
We show that MulDEns does not require tailoring the augmentation strategy or the training process specific to a dataset.
arXiv Detail & Related papers (2021-12-17T23:21:50Z)
- Unsupervised Domain Generalization for Person Re-identification: A Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module.
arXiv Detail & Related papers (2021-11-30T02:35:51Z)
- Variational Disentanglement for Domain Generalization [68.85458536180437]
We propose to tackle the problem of domain generalization with an effective framework named the Variational Disentanglement Network (VDN).
VDN is capable of disentangling the domain-specific features and task-specific features, where the task-specific features are expected to be better generalized to unseen but related test data.
arXiv Detail & Related papers (2021-09-13T09:55:32Z)
- Adversarially Adaptive Normalization for Single Domain Generalization [71.80587939738672]
We propose a generic normalization approach, adaptive standardization and rescaling normalization (ASR-Norm).
ASR-Norm learns both the standardization and rescaling statistics via neural networks.
We show that ASR-Norm can bring consistent improvement to the state-of-the-art ADA approaches.
arXiv Detail & Related papers (2021-06-01T23:58:23Z)
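The ASR-Norm idea above can be illustrated with a toy NumPy sketch: the standardization statistics (mean, scale) and the rescaling parameters (gamma, beta) come from learner-supplied callables rather than being fixed batch statistics. In the actual method these callables are neural networks trained end to end, which is not reproduced here; `asr_norm_sketch`, `std_net`, and `rescale_net` are illustrative names, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def asr_norm_sketch(x, std_net, rescale_net):
    """Normalize x with standardization and rescaling statistics produced by
    caller-supplied functions (a toy stand-in for ASR-Norm's learned networks)."""
    mu, sigma = std_net(x)             # learned standardization statistics
    x_hat = (x - mu) / (sigma + 1e-5)  # standardize features
    gamma, beta = rescale_net(x)       # learned rescaling statistics
    return gamma * x_hat + beta

# With plain batch statistics and identity rescaling, the sketch reduces to
# ordinary per-feature standardization:
x = rng.normal(size=(8, 4))
out = asr_norm_sketch(x,
                      std_net=lambda z: (z.mean(axis=0), z.std(axis=0)),
                      rescale_net=lambda z: (1.0, 0.0))
```

The design point is that swapping the two callables for small networks lets the normalization adapt to each input, which is what gives the method its robustness to unseen domain styles.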
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.