Improving Robustness against Real-World and Worst-Case Distribution
Shifts through Decision Region Quantification
- URL: http://arxiv.org/abs/2205.09619v1
- Date: Thu, 19 May 2022 15:25:55 GMT
- Title: Improving Robustness against Real-World and Worst-Case Distribution
Shifts through Decision Region Quantification
- Authors: Leo Schwinn and Leon Bungert and An Nguyen and René Raab and Falk
Pulsmeyer and Doina Precup and Björn Eskofier and Dario Zanca
- Abstract summary: We propose the Decision Region Quantification (DRQ) algorithm to improve the robustness of any differentiable pre-trained model.
DRQ analyzes the robustness of local decision regions in the vicinity of a given data point to make more reliable predictions.
An extensive empirical evaluation shows that DRQ increases the robustness of adversarially and non-adversarially trained models against real-world and worst-case distribution shifts.
- Score: 34.52826326208197
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The reliability of neural networks is essential for their use in
safety-critical applications. Existing approaches generally aim at improving
the robustness of neural networks to either real-world distribution shifts
(e.g., common corruptions and perturbations, spatial transformations, and
natural adversarial examples) or worst-case distribution shifts (e.g.,
optimized adversarial examples). In this work, we propose the Decision Region
Quantification (DRQ) algorithm to improve the robustness of any differentiable
pre-trained model against both real-world and worst-case distribution shifts in
the data. DRQ analyzes the robustness of local decision regions in the vicinity
of a given data point to make more reliable predictions. We theoretically
motivate the DRQ algorithm by showing that it effectively smooths spurious
local extrema in the decision surface. Furthermore, we propose an
implementation using targeted and untargeted adversarial attacks. An extensive
empirical evaluation shows that DRQ increases the robustness of adversarially
and non-adversarially trained models against real-world and worst-case
distribution shifts on several computer vision benchmark datasets.
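To make the mechanism concrete, here is a minimal sketch of a DRQ-style prediction rule in PyTorch. All names (`pgd`, `drq_predict`) and hyperparameters are illustrative, and a plain PGD attack stands in for the paper's exact targeted and untargeted procedures; this is not the authors' implementation.
```python
# Hedged sketch of a DRQ-style prediction rule (hypothetical names; plain
# PGD stands in for the paper's exact targeted/untargeted attacks).
import torch
import torch.nn.functional as F

def pgd(model, x, target, eps=0.03, steps=10, lr=0.01, targeted=True):
    """L-inf PGD: signed gradient ascent on the loss within an eps-ball."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), target)
        if targeted:
            loss = -loss  # ascent on -CE = descent toward the target class
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

def drq_predict(model, x, eps=0.03, num_candidates=3):
    """Score candidate classes by how robust their local decision region
    around x is (assumes a single example of shape (1, ...))."""
    candidates = model(x).topk(num_candidates, dim=1).indices[0]
    scores = []
    for c in candidates:
        target = torch.full((x.shape[0],), int(c), dtype=torch.long)
        # Targeted attack: find a nearby point inside class c's region.
        x_c = pgd(model, x, target, eps=eps, targeted=True)
        # Untargeted attack from there probes how easily the region is
        # left; the surviving confidence for c serves as a robustness score.
        x_adv = pgd(model, x_c, target, eps=eps, targeted=False)
        scores.append(F.softmax(model(x_adv), dim=1)[0, c].item())
    return int(candidates[torch.tensor(scores).argmax()])
```
Under this rule the class whose local decision region best resists the untargeted probe is returned, which mirrors the smoothing of spurious local extrema that the theoretical analysis describes.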
Related papers
- Distributionally Robust Domain Adaptation [12.02023514105999]
Domain Adaptation (DA) has recently received significant attention due to its potential to adapt a learning model across source and target domains with mismatched distributions.
In this paper, we propose DRDA, a distributionally robust domain adaptation method.
arXiv Detail & Related papers (2022-10-30T17:29:22Z)
- Distributed Distributionally Robust Optimization with Non-Convex Objectives [24.64654924173679]
An asynchronous distributed algorithm named Asynchronous Single-looP alternatIve gRadient projEction is proposed.
A new uncertainty set, the constrained D-norm uncertainty set, is developed to leverage the prior distribution and flexibly control the degree of robustness.
Empirical studies on real-world datasets demonstrate that the proposed method not only achieves fast convergence but also remains robust against data as well as malicious attacks.
arXiv Detail & Related papers (2022-10-14T07:39:13Z)
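For intuition only, the sketch below shows a generic single-loop DRO update that alternates a descent step on the model with a projected ascent step on group weights. It is not the ASPIRE algorithm, and the simplex projection is a stand-in for the paper's constrained D-norm uncertainty set; all names are hypothetical.
```python
# Generic single-loop DRO sketch (NOT the paper's ASPIRE algorithm):
# alternate descent on the model with projected ascent on group weights.
import torch

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u, _ = torch.sort(v, descending=True)
    css = torch.cumsum(u, dim=0) - 1.0
    k = torch.arange(1, len(v) + 1, dtype=v.dtype)
    rho = torch.nonzero(u - css / k > 0).max()
    theta = css[rho] / (rho + 1.0)
    return torch.clamp(v - theta, min=0.0)

def dro_step(model, opt, losses_by_group, w, eta_w=0.1):
    """One alternating update: descent on parameters, ascent on weights."""
    loss_vec = torch.stack(losses_by_group)   # per-group scalar losses
    opt.zero_grad()
    (w * loss_vec).sum().backward()
    opt.step()                                # descent on model parameters
    with torch.no_grad():
        # ascent on w up-weights the worst-performing groups
        w = project_simplex(w + eta_w * loss_vec.detach())
    return w
```
Starting from uniform weights, e.g. `w = torch.full((G,), 1.0 / G)`, repeated calls shift mass toward the hardest groups while the model adapts to them.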
- Global-Local Regularization Via Distributional Robustness [26.983769514262736]
Deep neural networks are often vulnerable to adversarial examples and distribution shifts.
Recent approaches leverage distributionally robust optimization (DRO) to find the most challenging distribution.
We propose a novel regularization technique, following the vein of the Wasserstein-based DRO framework.
arXiv Detail & Related papers (2022-03-01T15:36:12Z)
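The snippet above does not give the paper's exact regularizer; as background, a common first-order surrogate of Wasserstein-based DRO penalizes the input-gradient norm of the loss, sketched below (illustrative, not this paper's method).
```python
# First-order Wasserstein-DRO surrogate: loss plus a penalty on the
# input-gradient norm (a sketch, not this paper's exact regularizer).
import torch
import torch.nn.functional as F

def wdro_surrogate_loss(model, x, y, rho=0.5):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)
    # Worst-case loss over a small Wasserstein ball is approximately
    # loss + rho * ||grad_x loss|| by a first-order expansion.
    return loss + rho * grad.flatten(1).norm(dim=1).mean()
```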
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that, despite its simplicity, DoC consistently outperforms other quantifications of distributional difference.
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
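A minimal sketch of the difference-of-confidences idea, assuming a softmax classifier and standard PyTorch data loaders; `doc_accuracy_estimate` is a hypothetical name.
```python
# Sketch of DoC: the drop in average max-softmax confidence between an
# in-distribution set and a shifted set predicts the drop in accuracy.
import torch
import torch.nn.functional as F

@torch.no_grad()
def avg_confidence(model, loader):
    confs = []
    for x, _ in loader:
        confs.append(F.softmax(model(x), dim=1).max(dim=1).values)
    return torch.cat(confs).mean().item()

def doc_accuracy_estimate(model, id_loader, shifted_loader, id_accuracy):
    doc = avg_confidence(model, id_loader) - avg_confidence(model, shifted_loader)
    return id_accuracy - doc  # predicted accuracy on the shifted data
```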
- Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration [2.8935588665357077]
We propose an efficient yet general modelling approach for obtaining well-calibrated, trustworthy probabilities for samples obtained after a domain shift.
We introduce a new training strategy combining an entropy-encouraging loss term with an adversarial calibration loss term and demonstrate that this results in well-calibrated and technically trustworthy predictions.
arXiv Detail & Related papers (2020-12-20T13:39:29Z)
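As a hedged illustration of combining an entropy-encouraging term with an adversarial calibration term, the sketch below uses an FGSM perturbation and a squared confidence/accuracy gap; the paper's actual loss terms may differ, and all weights are illustrative.
```python
# Illustrative combined training loss: cross-entropy, an entropy bonus,
# and a calibration penalty on FGSM-perturbed inputs (not the paper's
# exact formulation).
import torch
import torch.nn.functional as F

def training_loss(model, x, y, eps=0.01, lam_ent=0.1, lam_cal=1.0):
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()

    # FGSM perturbation as a cheap adversarial shift
    x_adv = x.clone().requires_grad_(True)
    loss_adv = F.cross_entropy(model(x_adv), y)
    (g,) = torch.autograd.grad(loss_adv, x_adv)
    x_adv = (x_adv + eps * g.sign()).detach()

    # Penalize the gap between confidence and accuracy on perturbed data,
    # pushing the model toward calibration under shift.
    logits_adv = model(x_adv)
    conf = F.softmax(logits_adv, dim=1).max(dim=1).values.mean()
    acc = (logits_adv.argmax(dim=1) == y).float().mean()
    cal_gap = (conf - acc).pow(2)

    return ce - lam_ent * entropy + lam_cal * cal_gap
```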
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
arXiv Detail & Related papers (2020-10-08T02:10:54Z)
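The density-ratio step mentioned above can be illustrated with the standard domain-classifier trick: a model trained to separate source from target features yields the ratio from its probabilities. A sketch, assuming precomputed feature arrays (the paper's estimator may differ):
```python
# Density-ratio estimation via a domain classifier:
# r(x) = p_source(x) / p_target(x), recovered from the classifier's
# probabilities up to the source/target sample-size prior.
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio_estimator(source_feats, target_feats):
    X = np.vstack([source_feats, target_feats])
    d = np.concatenate([np.zeros(len(source_feats)),
                        np.ones(len(target_feats))])
    clf = LogisticRegression(max_iter=1000).fit(X, d)

    def ratio(x):
        p_target = clf.predict_proba(x)[:, 1]
        prior = len(target_feats) / len(source_feats)
        return prior * (1.0 - p_target) / np.clip(p_target, 1e-6, None)
    return ratio
```
A high ratio marks a test sample as close to the training distribution, which is what makes it usable for weighting the calibration of uncertainties.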
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Global Distance-distributions Separation for Unsupervised Person Re-identification [93.39253443415392]
Existing unsupervised ReID approaches often fail to correctly identify positive and negative samples through distance-based matching/ranking.
We introduce a global distance-distributions separation constraint over the two distributions to encourage the clear separation of positive and negative samples from a global view.
We show that our method leads to significant improvement over the baselines and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-06-01T07:05:39Z)
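As one possible reading of a global distance-distributions separation constraint, the sketch below enforces a margin between the tails of the positive-pair and negative-pair distance distributions; the paper's exact formulation may differ.
```python
# Illustrative global separation constraint: push the negative-pair
# distance distribution away from the positive-pair one by a margin on
# their means and spreads (not the paper's exact loss).
import torch

def separation_loss(pos_dists, neg_dists, margin=1.0):
    mu_p, sd_p = pos_dists.mean(), pos_dists.std()
    mu_n, sd_n = neg_dists.mean(), neg_dists.std()
    # Require a margin between the right tail of positive distances and
    # the left tail of negative distances, separating them globally.
    return torch.relu((mu_p + sd_p) - (mu_n - sd_n) + margin)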
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity-inducing adversarial loss for learning latent variables and thereby obtain the diversity in output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
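For intuition, one generic way to encourage prediction diversity in an ensemble is to penalize pairwise agreement between member predictions, as sketched below; the paper instead optimizes an adversarial diversity loss over latent variables, which this sketch does not reproduce.
```python
# Generic ensemble-diversity regularizer: penalize pairwise agreement
# between member predictions (illustrative, not the paper's loss).
import torch
import torch.nn.functional as F

def diversity_penalty(member_logits):
    """member_logits: list of (batch, classes) tensors, one per member."""
    probs = [F.softmax(l, dim=1) for l in member_logits]
    penalty = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            # High when two members agree; minimizing it spreads the
            # ensemble's predictions over multiple modes.
            penalty = penalty + (probs[i] * probs[j]).sum(dim=1).mean()
    return penalty
```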