A Closer Look at Accuracy vs. Robustness
- URL: http://arxiv.org/abs/2003.02460v3
- Date: Sun, 12 Jul 2020 19:59:39 GMT
- Title: A Closer Look at Accuracy vs. Robustness
- Authors: Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Ruslan Salakhutdinov, Kamalika Chaudhuri
- Abstract summary: Current methods for training robust networks lead to a drop in test accuracy.
We show that real image datasets are actually separated, i.e., images from different classes lie at a positive distance from each other in input space.
We conclude that achieving robustness and accuracy in practice may require using methods that impose local Lipschitzness.
- Score: 94.2226357646813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current methods for training robust networks lead to a drop in test accuracy,
which has led prior works to posit that a robustness-accuracy tradeoff may be
inevitable in deep learning. We take a closer look at this phenomenon and first
show that real image datasets are actually separated. With this property in
mind, we then prove that robustness and accuracy should both be achievable for
benchmark datasets through locally Lipschitz functions, and hence, there should
be no inherent tradeoff between robustness and accuracy. Through extensive
experiments with robustness methods, we argue that the gap between theory and
practice arises from two limitations of current methods: either they fail to
impose local Lipschitzness or they are insufficiently generalized. We explore
combining dropout with robust training methods and obtain better
generalization. We conclude that achieving robustness and accuracy in practice
may require using methods that impose local Lipschitzness and augmenting them
with deep learning generalization techniques. Code available at
https://github.com/yangarbiter/robust-local-lipschitz
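To make the central quantity concrete, the following is a minimal sketch of how local Lipschitzness can be estimated empirically: for each input, maximize the ratio of the change in the network's output to the change in the input over a small L-infinity ball, via projected gradient ascent. This is an illustrative reimplementation under assumed conventions (a PyTorch classifier returning logits, inputs in [0, 1]), not the authors' code; see the linked repository for their implementation.

```python
# Sketch: lower-bound the local Lipschitz constant of `model` around a batch `x`
# by maximizing ||f(x') - f(x)||_1 / ||x' - x||_inf over the L_inf ball of radius eps.
# Assumes `model` is a PyTorch classifier mapping images in [0, 1] to logits.
import torch

def local_lipschitz_estimate(model, x, eps=8/255, step_size=2/255, steps=10):
    model.eval()
    with torch.no_grad():
        fx = model(x)                                # reference outputs f(x)
    # random start inside the L_inf ball of radius eps around x
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        diff = model(x_adv) - fx
        # ratio of output change (L1) to input change (L_inf); small constant for stability
        ratio = diff.flatten(1).norm(p=1, dim=1) / (
            (x_adv - x).flatten(1).norm(p=float("inf"), dim=1) + 1e-6)
        grad = torch.autograd.grad(ratio.sum(), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()                # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back to the ball
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    with torch.no_grad():
        diff = model(x_adv) - fx
        return diff.flatten(1).norm(p=1, dim=1) / (
            (x_adv - x).flatten(1).norm(p=float("inf"), dim=1) + 1e-6)
```

Averaging the returned values over a test set gives, roughly, the empirical local Lipschitzness that the paper compares across training methods.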
Related papers
- 1-Lipschitz Layers Compared: Memory, Speed, and Certifiable Robustness [22.09354138194545]
The robustness of neural networks against input perturbations with bounded magnitude represents a serious concern in the deployment of deep learning models in safety-critical systems.
Recently, the scientific community has focused on enhancing certifiable robustness guarantees by crafting 1-Lipschitz neural networks that leverage Lipschitz bounded dense and convolutional layers.
This paper provides a theoretical and empirical comparison between methods by evaluating them in terms of memory usage, speed, and certifiable robust accuracy.
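As a concrete, deliberately simple example of a Lipschitz-bounded dense layer, one common construction constrains the weight matrix's spectral norm to at most 1, which makes the linear map 1-Lipschitz with respect to the L2 norm. This is only one of the layer families the paper compares, shown here as a sketch:

```python
# One common way to obtain a (approximately) 1-Lipschitz dense layer in the L2 sense:
# spectral normalization reparametrizes the weight as W / sigma_max(W), where
# sigma_max(W) is estimated by power iteration. Dimensions here are arbitrary.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

layer = spectral_norm(nn.Linear(256, 256))  # spectral norm of the weight kept near 1

x = torch.randn(8, 256)
y = layer(x)  # ||layer(a) - layer(b)||_2 <= ||a - b||_2 for any inputs a, b
```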
arXiv Detail & Related papers (2023-11-28T14:50:50Z)
- Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization [43.98504250013897]
We develop a robust training algorithm to increase the margin in the output (logit) space while regularizing the Lipschitz constant of the model along vulnerable directions.
The relative accuracy of the bounds prevents excessive regularization and allows for more direct manipulation of the decision boundary.
Experiments on the MNIST, CIFAR-10, and Tiny-ImageNet data sets verify that our proposed algorithm obtains competitively improved results compared to the state-of-the-art.
arXiv Detail & Related papers (2023-09-29T20:07:02Z)
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
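For context on the smoothing approach described above, here is a minimal sketch of standard randomized-smoothing prediction (a majority vote of the base classifier over Gaussian-perturbed copies of the input, in the spirit of Cohen et al.). The paper's certification procedure itself is more involved, and the classifier interface here is an assumption:

```python
# Background sketch of randomized-smoothing prediction: majority vote of a base
# classifier under isotropic Gaussian input noise. Not the paper's certification
# procedure; `model` (logits output) and the CHW input convention are assumptions.
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=1000, batch=100):
    """Majority-vote class for a single image `x` (CHW, in [0, 1]) under N(0, sigma^2 I) noise."""
    model.eval()
    counts = None
    with torch.no_grad():
        remaining = n_samples
        while remaining > 0:
            b = min(batch, remaining)
            noisy = x.unsqueeze(0) + sigma * torch.randn(b, *x.shape)  # b noisy copies
            logits = model(noisy)
            votes = torch.bincount(logits.argmax(dim=1), minlength=logits.shape[1])
            counts = votes if counts is None else counts + votes
            remaining -= b
    return counts.argmax().item()
```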
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- (Almost) Provable Error Bounds Under Distribution Shift via Disagreement Discrepancy [8.010528849585937]
We derive an (almost) guaranteed upper bound on the error of deep neural networks under distribution shift using unlabeled test data.
In particular, our bound requires a simple, intuitive condition which is well justified by prior empirical works.
We also derive a new disagreement loss and expect it can serve as a drop-in replacement for future methods that require maximizing multiclass disagreement.
arXiv Detail & Related papers (2023-06-01T03:22:15Z)
- Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive Smoothing [9.637143119088426]
Mixing the outputs of a standard (accurate) classifier and a robust classifier can improve the accuracy-robustness trade-off; we show that the robust base classifier's confidence difference on correct and incorrect examples is the key to this improvement.
We adapt an adversarial input detector into a mixing network that adaptively adjusts the mixture of the two base models.
The proposed flexible method, termed "adaptive smoothing", can work in conjunction with existing or even future methods that improve clean accuracy, robustness, or adversary detection.
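A bare-bones sketch of the convex-combination idea behind such adaptive mixing is given below. The paper's actual mixing network is built from an adversarial-input detector and trained with its own procedure, so every module name and dimension here is an assumption:

```python
# Illustrative only: combine an accurate classifier and a robust classifier with an
# input-dependent weight alpha(x) in (0, 1). The real mixing network in the paper
# is an adapted adversarial-input detector; this just shows the combination step.
import torch
import torch.nn as nn

class AdaptiveMixture(nn.Module):
    def __init__(self, accurate_model, robust_model, num_classes=10):
        super().__init__()
        self.accurate = accurate_model
        self.robust = robust_model
        # tiny network that predicts the mixing weight from the robust model's logits
        self.mixer = nn.Sequential(nn.Linear(num_classes, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        acc_logits = self.accurate(x)
        rob_logits = self.robust(x)
        alpha = torch.sigmoid(self.mixer(rob_logits))  # weight on the robust model
        return alpha * rob_logits + (1 - alpha) * acc_logits
```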
arXiv Detail & Related papers (2023-01-29T22:05:28Z)
- Confidence-aware Training of Smoothed Classifiers for Certified Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
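The proxy itself is easy to state: for a given input, it is the fraction of Gaussian-perturbed copies that the classifier still labels with the target class. A minimal sketch under an assumed PyTorch classifier interface:

```python
# Sketch of the "accuracy under Gaussian noise" proxy for a single input: the
# fraction of noisy copies of `x` that the classifier labels as `y`.
# `model`, the CHW convention, and the hyperparameters are assumptions.
import torch

def accuracy_under_gaussian_noise(model, x, y, sigma=0.25, n_samples=100):
    model.eval()
    with torch.no_grad():
        noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
        preds = model(noisy).argmax(dim=1)
    return (preds == y).float().mean().item()
```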
arXiv Detail & Related papers (2022-12-18T03:57:12Z)
- Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z)
- Adversarial Robustness under Long-Tailed Distribution [93.50792075460336]
Adversarial robustness has attracted extensive studies recently by revealing the vulnerability and intrinsic characteristics of deep networks.
In this work we investigate the adversarial vulnerability as well as defense under long-tailed distributions.
We propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant classifier and data re-balancing.
arXiv Detail & Related papers (2021-04-06T17:53:08Z)
- Learning Accurate Dense Correspondences and When to Trust Them [161.76275845530964]
We aim to estimate a dense flow field relating two images, coupled with a robust pixel-wise confidence map.
We develop a flexible probabilistic approach that jointly learns the flow prediction and its uncertainty.
Our approach obtains state-of-the-art results on challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-01-05T18:54:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.