Using Synthetic Corruptions to Measure Robustness to Natural
Distribution Shifts
- URL: http://arxiv.org/abs/2107.12052v1
- Date: Mon, 26 Jul 2021 09:20:49 GMT
- Title: Using Synthetic Corruptions to Measure Robustness to Natural
Distribution Shifts
- Authors: Alfred Laugros and Alice Caplier and Matthieu Ospici
- Abstract summary: We propose a methodology to build synthetic corruption benchmarks that make robustness estimations more correlated with robustness to real-world distribution shifts.
Applying the proposed methodology, we build a new benchmark called ImageNet-Syn2Nat to predict image classifier robustness.
- Score: 6.445605125467574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthetic corruptions gathered into a benchmark are frequently used to
measure neural network robustness to distribution shifts. However, robustness
to synthetic corruption benchmarks is not always predictive of robustness to
distribution shifts encountered in real-world applications. In this paper, we
propose a methodology to build synthetic corruption benchmarks that make
robustness estimations more correlated with robustness to real-world
distribution shifts. Using the overlapping criterion, we split synthetic
corruptions into categories that help to better understand neural network
robustness. Based on these categories, we identify three parameters that are
relevant to take into account when constructing a corruption benchmark: number
of represented categories, balance among categories and size of benchmarks.
Applying the proposed methodology, we build a new benchmark called
ImageNet-Syn2Nat to predict image classifier robustness.
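The abstract names an overlapping criterion for grouping corruptions and three benchmark parameters (number of represented categories, balance among categories, and benchmark size). The sketch below is one plausible way to wire these ideas together; the overlap proxy used here (correlation of per-model accuracy drops), the greedy grouping, and all function names are illustrative assumptions, not the paper's exact definitions.

```python
# Hypothetical sketch: grouping synthetic corruptions by an overlap measure
# and assembling a benchmark balanced across the resulting categories.
# Names (overlapping_score, drop_table, ...) are illustrative, not from the paper.
import numpy as np

def overlapping_score(drops_a, drops_b):
    """Overlap proxy: correlation of per-model accuracy drops under two corruptions.

    drops_a, drops_b: 1-D arrays, one accuracy drop per evaluated model.
    """
    return float(np.corrcoef(drops_a, drops_b)[0, 1])

def group_by_overlap(drop_table, threshold=0.5):
    """Greedily cluster corruptions whose overlap with a category representative
    exceeds `threshold`.

    drop_table: dict mapping corruption name -> per-model accuracy drops.
    Returns a list of categories (lists of corruption names).
    """
    categories = []
    for name, drops in drop_table.items():
        for cat in categories:
            representative = drop_table[cat[0]]
            if overlapping_score(drops, representative) >= threshold:
                cat.append(name)
                break
        else:
            categories.append([name])  # no close category found: start a new one
    return categories

def build_balanced_benchmark(categories, per_category=2, rng=None):
    """Apply the three benchmark parameters: represent every category,
    keep the same number of corruptions per category, and bound the total size."""
    rng = rng or np.random.default_rng(0)
    benchmark = []
    for cat in categories:
        k = min(per_category, len(cat))
        benchmark.extend(rng.choice(cat, size=k, replace=False))
    return benchmark
```

A benchmark could then be assembled with `build_balanced_benchmark(group_by_overlap(drop_table), per_category=2)`, which covers every category, keeps the categories balanced, and fixes the total size.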
Related papers
- Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the one implicitly assumed by a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z) - Investigating the Corruption Robustness of Image Classifiers with Random
Lp-norm Corruptions [3.1337872355726084]
This study investigates the use of random p-norm corruptions to augment the training and test data of image classifiers.
We find that training data augmentation with a combination of p-norm corruptions significantly improves corruption robustness, even on top of state-of-the-art data augmentation schemes (an illustrative sketch of such a corruption appears after this list).
arXiv Detail & Related papers (2023-05-09T12:45:43Z) - GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models [60.48306899271866]
We present a new framework, called GREAT Score, for global robustness evaluation of adversarial perturbation using generative models.
We show high correlation and significantly reduced cost of GREAT Score when compared to the attack-based model ranking on RobustBench.
GREAT Score can be used for remote auditing of privacy-sensitive black-box models.
arXiv Detail & Related papers (2023-04-19T14:58:27Z) - A Systematic Evaluation of Node Embedding Robustness [77.29026280120277]
We assess the empirical robustness of node embedding models to random and adversarial poisoning attacks.
We compare edge addition, deletion and rewiring strategies computed using network properties as well as node labels.
We find that node classification suffers greater performance degradation than network reconstruction.
arXiv Detail & Related papers (2022-09-16T17:20:23Z) - Utilizing Class Separation Distance for the Evaluation of Corruption
Robustness of Machine Learning Classifiers [0.6882042556551611]
We propose a test data augmentation method that uses a robustness distance $\epsilon$ derived from the dataset's minimal class separation distance.
The resulting MSCR metric allows a dataset-specific comparison of different classifiers with respect to their corruption robustness.
Our results indicate that robustness training through simple data augmentation can already slightly improve accuracy.
arXiv Detail & Related papers (2022-06-27T15:56:16Z) - Noisy Learning for Neural ODEs Acts as a Robustness Locus Widening [0.802904964931021]
We investigate the problems and challenges of evaluating the robustness of Differential Equation-based (DE) networks against synthetic distribution shifts.
We propose a novel and simple accuracy metric which can be used to evaluate intrinsic robustness and to validate dataset corruption simulators.
arXiv Detail & Related papers (2022-06-16T15:10:38Z) - SmoothMix: Training Confidence-calibrated Smoothed Classifiers for
Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z) - Using the Overlapping Score to Improve Corruption Benchmarks [6.445605125467574]
We propose a metric called corruption overlapping score, which can be used to reveal flaws in corruption benchmarks.
We argue that taking into account overlappings between corruptions can help to improve existing benchmarks or build better ones.
arXiv Detail & Related papers (2021-05-26T06:42:54Z) - An Orthogonal Classifier for Improving the Adversarial Robustness of
Neural Networks [21.13588742648554]
Recent efforts have shown that imposing certain modifications on the classification layer can improve the robustness of neural networks.
We explicitly construct a dense orthogonal weight matrix whose entries have the same magnitude, leading to a novel robust classifier.
Our method is efficient and competitive to many state-of-the-art defensive approaches.
arXiv Detail & Related papers (2021-05-19T13:12:14Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - A general framework for defining and optimizing robustness [74.67016173858497]
We propose a rigorous and flexible framework for defining different types of robustness properties for classifiers.
Our concept is based on postulates that robustness of a classifier should be considered as a property that is independent of accuracy.
We develop a very general robustness framework that is applicable to any type of classification model.
arXiv Detail & Related papers (2020-06-19T13:24:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.