Global-Local Regularization Via Distributional Robustness
- URL: http://arxiv.org/abs/2203.00553v1
- Date: Tue, 1 Mar 2022 15:36:12 GMT
- Title: Global-Local Regularization Via Distributional Robustness
- Authors: Hoang Phan, Trung Le, Trung Phung, Tuan Anh Bui, Nhat Ho and Dinh Phung
- Abstract summary: Deep neural networks are often vulnerable to adversarial examples and distribution shifts.
Recent approaches leverage distributional robustness optimization (DRO) to find the most challenging distribution.
We propose a novel regularization technique in the vein of the Wasserstein-based DRO framework.
- Score: 26.983769514262736
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite superior performance in many situations, deep neural
networks are often vulnerable to adversarial examples and distribution
shifts, limiting model generalization ability in real-world applications. To
alleviate these problems, recent approaches leverage distributional
robustness optimization (DRO) to find the most challenging distribution, and
then minimize the loss function over this most challenging distribution.
Despite achieving some improvements, these DRO approaches have some obvious
limitations. First, they purely focus on local regularization to strengthen
model robustness, missing a global regularization effect that is useful in
many real-world applications (e.g., domain adaptation, domain
generalization, and adversarial machine learning). Second, the loss
functions in the existing DRO approaches operate only on the most
challenging distribution and hence decouple from the original distribution,
leading to restrictive modeling capability. In this paper, we propose a
novel regularization technique in the vein of the Wasserstein-based DRO
framework. Specifically, we define a particular joint distribution and
Wasserstein-based uncertainty, allowing us to couple the original and most
challenging distributions to enhance modeling capability and to apply both
local and global regularizations. Empirical studies on different learning
problems demonstrate that our proposed approach significantly outperforms
existing regularization approaches in various domains: semi-supervised
learning, domain adaptation, domain generalization, and adversarial machine
learning.
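
For intuition, the generic Wasserstein-DRO training pattern the abstract builds on can be sketched in a few lines of PyTorch: an inner loop ascends the loss minus a transport-cost penalty to find the most challenging inputs, and the outer step minimizes a clean-plus-worst-case (local) loss together with a batch-level (global) regularizer. Everything below, including the hyperparameters, the PGD-style inner update, and the symmetric-KL stand-in for a distribution-level distance, is an illustrative assumption rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def wdro_step(model, x, y, optimizer, gamma=1.0, lam_global=0.1,
              inner_steps=5, inner_lr=0.1):
    """One hedged WDRO-style training step; all knobs are assumed values."""
    # Inner maximization: find the most challenging inputs x_adv by ascending
    # loss(x_adv) - gamma * cost(x_adv, x), the Lagrangian relaxation of the
    # Wasserstein-ball constraint. The signed (PGD-style) update is one
    # common choice, not necessarily the paper's.
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(inner_steps):
        obj = (F.cross_entropy(model(x_adv), y)
               - gamma * ((x_adv - x) ** 2).mean())
        grad, = torch.autograd.grad(obj, x_adv)
        x_adv = (x_adv + inner_lr * grad.sign()).detach().requires_grad_(True)
    x_adv = x_adv.detach()

    # Outer minimization: the "local" term couples every clean sample with
    # its own worst-case counterpart; the "global" term matches the two
    # output distributions at the batch level (a symmetric KL between
    # batch-averaged softmax outputs, used here as a cheap stand-in for a
    # distribution-level transport distance).
    logits_clean, logits_adv = model(x), model(x_adv)
    local = F.cross_entropy(logits_clean, y) + F.cross_entropy(logits_adv, y)
    p = logits_clean.softmax(dim=1).mean(dim=0)
    q = logits_adv.softmax(dim=1).mean(dim=0)
    global_reg = 0.5 * ((p * (p / q).log()).sum() + (q * (q / p).log()).sum())

    loss = local + lam_global * global_reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's framing, the global term would instead come from the Wasserstein coupling between the original and most challenging distributions; the sketch only shows where such a term enters the training loop.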
Related papers
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning parameters.
We propose a new framework that leverages the intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
arXiv Detail & Related papers (2024-08-13T07:56:21Z)
- Non-stationary Domain Generalization: Theory and Algorithm [11.781050299571692]
In this paper, we study domain generalization in a non-stationary environment.
We first examine the impact of environmental non-stationarity on model performance.
Then, we propose a novel algorithm based on adaptive invariant representation learning.
arXiv Detail & Related papers (2024-05-10T21:32:43Z)
- Normalization Perturbation: A Simple Domain Generalization Method for Real-World Domain Shifts [133.99270341855728]
Real-world domain styles can vary substantially due to environment changes and sensor noises.
Deep models only know the training domain style.
We propose Normalization Perturbation to overcome this domain style overfitting problem.
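
As a rough sketch of the "perturb feature statistics" idea this summary points to, the snippet below randomly rescales the per-channel activations of a shallow layer during training so the model cannot overfit a single domain style; the function name and noise scale are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def perturb_channel_stats(feat, noise_std=0.5):
    """Randomly rescale each channel of a shallow feature map (training only).

    feat: (N, C, H, W) activations; noise_std is an assumed knob.
    """
    n, c = feat.shape[:2]
    # Per-sample, per-channel scaling factors drawn around 1.0, so the
    # channel statistics (mean/std) are jointly perturbed.
    alpha = 1.0 + noise_std * torch.randn(n, c, 1, 1, device=feat.device)
    return feat * alpha
```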
arXiv Detail & Related papers (2022-11-08T17:36:49Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
Another method minimizes the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge [22.285156929279207]
Domain generalization aims at learning a universal model that performs well on unseen target domains.
We propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG).
arXiv Detail & Related papers (2022-07-11T14:46:50Z)
- Feature-Distribution Perturbation and Calibration for Generalized Person ReID [47.84576229286398]
Person Re-identification (ReID) has been advanced remarkably over the last 10 years along with the rapid development of deep learning for visual recognition.
We propose a Feature-Distribution Perturbation and Calibration (PECA) method to derive generic feature representations for person ReID.
arXiv Detail & Related papers (2022-05-23T11:06:12Z)
- Improving Robustness against Real-World and Worst-Case Distribution Shifts through Decision Region Quantification [34.52826326208197]
We propose the Decision Region Quantification (DRQ) algorithm to improve the robustness of any differentiable pre-trained model.
DRQ analyzes the robustness of local decision regions in the vicinity of a given data point to make more reliable predictions.
An extensive empirical evaluation shows that DRQ increases the robustness of adversarially and non-adversarially trained models against real-world and worst-case distribution shifts.
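
For a loose picture of what inspecting the local decision region around a data point can look like, here is a randomized-smoothing-style sketch that averages predictions over sampled neighbors; this generic stand-in is not the DRQ algorithm itself, and eps, n_samples, and the function name are assumptions.

```python
import torch

@torch.no_grad()
def neighborhood_predict(model, x, eps=0.1, n_samples=32):
    """Average softmax outputs over points sampled in an L-inf ball around x."""
    # x: a single input of shape (C, H, W); returns the smoothed class index.
    noise = (torch.rand(n_samples, *x.shape, device=x.device) * 2 - 1) * eps
    probs = model(x.unsqueeze(0) + noise).softmax(dim=1)
    return probs.mean(dim=0).argmax().item()
```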
arXiv Detail & Related papers (2022-05-19T15:25:55Z)
- Forward Super-Resolution: How Can GANs Learn Hierarchical Generative Models for Real-World Distributions [66.05472746340142]
Generative adversarial networks (GANs) are among the most successful models for learning high-complexity, real-world distributions.
In this paper, we show how GANs can efficiently learn the distribution of real-life images.
arXiv Detail & Related papers (2021-06-04T17:33:29Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)