Boosting Randomized Smoothing with Variance Reduced Classifiers
- URL: http://arxiv.org/abs/2106.06946v1
- Date: Sun, 13 Jun 2021 08:40:27 GMT
- Title: Boosting Randomized Smoothing with Variance Reduced Classifiers
- Authors: Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev
- Abstract summary: We motivate why ensembles are a particularly suitable choice as base models for Randomized Smoothing (RS).
We empirically confirm this choice, obtaining state-of-the-art results in multiple settings.
- Score: 4.110108749051657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Randomized Smoothing (RS) is a promising method for obtaining robustness
certificates by evaluating a base model under noise. In this work we: (i)
theoretically motivate why ensembles are a particularly suitable choice as base
models for RS, and (ii) empirically confirm this choice, obtaining
state-of-the-art results in multiple settings. The key insight of our work is that the
reduced variance of ensembles over the perturbations introduced in RS leads to
significantly more consistent classifications for a given input, in turn
leading to substantially increased certifiable radii for difficult samples. We
also introduce key optimizations which enable an up to 50-fold decrease in
sample complexity of RS, thus drastically reducing its computational overhead.
Experimentally, we show that ensembles of only 3 to 10 classifiers consistently
improve on the strongest single model with respect to their average certified
radius (ACR) by 5% to 21% on both CIFAR-10 and ImageNet. On the latter, we
achieve a state-of-the-art ACR of 1.11. We release all code and models required
to reproduce our results upon publication.
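To make the certified-prediction procedure concrete, here is a minimal sketch of Cohen et al.-style Randomized Smoothing with an ensemble as the base model, in the spirit of the abstract above. The ensemble interface (a list of callables returning softmax vectors), the hyperparameter names, and the use of SciPy's Clopper-Pearson interval are illustrative assumptions, not the authors' released code.

```python
import numpy as np
from scipy.stats import binomtest, norm

def sample_counts(ensemble, x, sigma, n, num_classes, rng):
    # Classify n Gaussian-noisy copies of x. The base prediction is the
    # argmax of the ensemble's averaged softmax output; the averaging
    # reduces prediction variance under the RS noise (the key insight above).
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        x_noisy = x + rng.normal(scale=sigma, size=x.shape)
        probs = np.mean([model(x_noisy) for model in ensemble], axis=0)
        counts[int(np.argmax(probs))] += 1
    return counts

def certify(ensemble, x, sigma, n0, n, alpha, num_classes, rng):
    # CERTIFY (Cohen et al., 2019): guess the top class c_hat from n0 samples,
    # then lower-bound its probability p_A with n fresh samples via a
    # Clopper-Pearson interval. If p_A > 1/2, the certified l2 radius is
    # sigma * Phi^{-1}(p_A); otherwise abstain.
    c_hat = int(np.argmax(sample_counts(ensemble, x, sigma, n0, num_classes, rng)))
    counts = sample_counts(ensemble, x, sigma, n, num_classes, rng)
    ci = binomtest(int(counts[c_hat]), n).proportion_ci(
        confidence_level=1 - alpha, method="exact")
    if ci.low > 0.5:
        return c_hat, sigma * norm.ppf(ci.low)
    return -1, 0.0  # abstain

# Hypothetical usage: each model maps an input array to a softmax vector.
# rng = np.random.default_rng(0)
# c, radius = certify(ensemble, x, sigma=0.25, n0=100, n=10_000,
#                     alpha=0.001, num_classes=10, rng=rng)
```

Since the certified radius grows with Phi^{-1}(p_A), the ensemble's lower-variance predictions, which push p_A towards 1 on difficult inputs, translate directly into larger radii; the sample-complexity optimizations mentioned above further reduce the number n of noisy forward passes.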
Related papers
- Covariance-corrected Whitening Alleviates Network Degeneration on Imbalanced Classification [6.197116272789107]
Class imbalance is a critical issue in image classification that significantly affects the performance of deep recognition models.
We propose a novel framework called Whitening-Net to mitigate the degenerate solutions.
In scenarios with extreme class imbalance, the batch covariance statistic exhibits significant fluctuations, impeding the convergence of the whitening operation.
arXiv Detail & Related papers (2024-08-30T10:49:33Z)
- Implicit Grid Convolution for Multi-Scale Image Super-Resolution [6.8410780175245165]
We propose a framework for training multiple integer scales simultaneously with a single model.
We use a single encoder to extract features and introduce a novel upsampler, Implicit Grid Convolution (IGConv).
Our experiments demonstrate that training multiple scales with a single model reduces the training budget and stored parameters by one-third.
arXiv Detail & Related papers (2024-08-19T03:30:15Z)
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- (Certified!!) Adversarial Robustness for Free! [116.6052628829344]
We certify 71% accuracy on ImageNet under adversarial perturbations constrained to be within an ℓ2-norm of 0.5.
We obtain these results using only pretrained diffusion models and image classifiers, without requiring any fine tuning or retraining of model parameters.
arXiv Detail & Related papers (2022-06-21T17:27:27Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- IB-GAN: A Unified Approach for Multivariate Time Series Classification under Class Imbalance [1.854931308524932]
Non-parametric data augmentation with Generative Adversarial Networks (GANs) offers a promising solution.
We propose Imputation Balanced GAN (IB-GAN), a novel method that joins data augmentation and classification in a one-step process via an imputation-balancing approach.
arXiv Detail & Related papers (2021-10-14T15:31:16Z)
- Open-Set Recognition: A Good Closed-Set Classifier is All You Need [146.6814176602689]
We show that the ability of a classifier to make the 'none-of-the-above' decision is highly correlated with its accuracy on the closed-set classes.
We use this correlation to boost the performance of the cross-entropy OSR 'baseline' by improving its closed-set accuracy.
We also construct new benchmarks which better respect the task of detecting semantic novelty.
arXiv Detail & Related papers (2021-10-12T17:58:59Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
- Insta-RS: Instance-wise Randomized Smoothing for Improved Robustness and Accuracy [9.50143683501477]
Insta-RS is a multiple-start search algorithm that assigns customized Gaussian variances to test examples.
Insta-RS Train is a novel two-stage training algorithm that adaptively adjusts and customizes the noise level of each training example.
We show that our method significantly enhances the average certified radius (ACR) as well as the clean data accuracy; a sketch of this per-input noise-selection idea appears after this list.
arXiv Detail & Related papers (2021-03-07T19:46:07Z)
- Data Dependent Randomized Smoothing [127.34833801660233]
We show that our data dependent framework can be seamlessly incorporated into 3 randomized smoothing approaches.
We get 9% and 6% improvement over the certified accuracy of the strongest baseline for a radius of 0.5 on CIFAR10 and ImageNet.
arXiv Detail & Related papers (2020-12-08T10:53:11Z)
- Iterative Averaging in the Quest for Best Test Error [22.987387623516614]
We analyse and explain the increased generalisation performance of iterate averaging using a Gaussian process perturbation model.
We derive three phenomena from our theoretical results.
We showcase the efficacy of our approach on the CIFAR-10/100, ImageNet and Penn Treebank datasets.
arXiv Detail & Related papers (2020-03-02T23:27:29Z)
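Two entries above, Insta-RS and Data Dependent Randomized Smoothing, share the idea of choosing the smoothing noise level per input rather than globally. Below is a hedged illustration of that idea, reusing the certify helper from the earlier sketch; the plain grid search stands in for Insta-RS's multiple-start search and is not either paper's actual algorithm.

```python
def select_sigma(ensemble, x, sigmas, n0, n, alpha, num_classes, rng):
    # Illustrative per-input noise selection: try a small set of candidate
    # sigmas and keep the one yielding the largest certified radius for x.
    best_class, best_radius, best_sigma = -1, 0.0, None
    for sigma in sigmas:
        c, r = certify(ensemble, x, sigma, n0, n, alpha, num_classes, rng)
        if c != -1 and r > best_radius:
            best_class, best_radius, best_sigma = c, r, sigma
    return best_class, best_radius, best_sigma
```

Note that selecting sigma from the same samples later used for certification weakens the statistical guarantee; the actual methods include dedicated procedures to account for this.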
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.