Boosting Randomized Smoothing with Variance Reduced Classifiers
- URL: http://arxiv.org/abs/2106.06946v1
- Date: Sun, 13 Jun 2021 08:40:27 GMT
- Title: Boosting Randomized Smoothing with Variance Reduced Classifiers
- Authors: Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev
- Abstract summary: We motivate why ensembles are a particularly suitable choice as base models for Randomized Smoothing (RS).
We empirically confirm this choice, obtaining state-of-the-art results in multiple settings.
- Score: 4.110108749051657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Randomized Smoothing (RS) is a promising method for obtaining robustness
certificates by evaluating a base model under noise. In this work we: (i)
theoretically motivate why ensembles are a particularly suitable choice as base
models for RS, and (ii) empirically confirm this choice, obtaining
state-of-the-art results in multiple settings. The key insight of our work is that the
reduced variance of ensembles over the perturbations introduced in RS leads to
significantly more consistent classifications for a given input, in turn
leading to substantially increased certifiable radii for difficult samples. We
also introduce key optimizations which enable an up to 50-fold decrease in
sample complexity of RS, thus drastically reducing its computational overhead.
Experimentally, we show that ensembles of only 3 to 10 classifiers consistently
improve on the strongest single model with respect to their average certified
radius (ACR) by 5% to 21% on both CIFAR-10 and ImageNet. On the latter, we
achieve a state-of-the-art ACR of 1.11. We release all code and models required
to reproduce our results upon publication.
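To make the abstract's setup concrete, the following is a minimal illustrative sketch (not the authors' released code) of certifying a smoothed ensemble classifier with the standard Cohen et al. (2019) procedure: the base-model prediction is an average over several classifiers, a Clopper-Pearson lower bound is taken on the probability of the top class under Gaussian noise, and the certified radius is sigma * Phi^-1(p_lower). The function and parameter names (ensemble_predict, certify, n0, n, alpha) are assumptions for illustration; the paper's exact output aggregation and its sample-complexity optimizations are not reproduced here.

```python
# Illustrative sketch of Randomized Smoothing certification with an ensemble
# base model (assumed setup, not the paper's implementation).
import numpy as np
from scipy.stats import beta, norm


def ensemble_predict(models, x):
    """Average the class probabilities of K base classifiers and take the argmax.

    Averaging several base models reduces the variance of the prediction under
    the Gaussian perturbations used by RS, which is the effect the paper exploits.
    """
    probs = np.mean([m(x) for m in models], axis=0)
    return int(np.argmax(probs))


def clopper_pearson_lower(k, n, alpha):
    """One-sided (1 - alpha) lower confidence bound on a binomial proportion."""
    return 0.0 if k == 0 else float(beta.ppf(alpha, k, n - k + 1))


def certify(models, x, sigma=0.25, n0=100, n=1000, alpha=0.001, rng=None):
    """Return (predicted class, certified L2 radius), or (None, 0.0) on abstention."""
    rng = np.random.default_rng() if rng is None else rng
    sample = lambda: ensemble_predict(models, x + sigma * rng.standard_normal(x.shape))
    # Step 1: guess the top class of the smoothed classifier from n0 noisy draws.
    guesses = [sample() for _ in range(n0)]
    c_hat = max(set(guesses), key=guesses.count)
    # Step 2: lower-bound P[base model predicts c_hat under noise] with n draws.
    k = sum(sample() == c_hat for _ in range(n))
    p_lower = clopper_pearson_lower(k, n, alpha)
    if p_lower <= 0.5:
        return None, 0.0  # abstain: the top class is not certifiably dominant
    return c_hat, sigma * norm.ppf(p_lower)  # radius R = sigma * Phi^{-1}(p_lower)


def average_certified_radius(models, labelled_data, **kwargs):
    """ACR: mean certified radius over a labelled test set, counting abstentions
    and misclassified samples as radius 0."""
    radii = []
    for x, y in labelled_data:
        c, r = certify(models, x, **kwargs)
        radii.append(r if c == y else 0.0)
    return float(np.mean(radii))
```

Under this setup, the abstract's claim is that the lower prediction variance of the ensemble makes the noisy predictions more consistent on hard inputs, which raises the estimated lower bound p_lower and hence the certified radius sigma * Phi^-1(p_lower); the sample-complexity optimizations (not sketched here) reduce the number of noisy evaluations n needed per input.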
Related papers
- GHOST: Gaussian Hypothesis Open-Set Technique [10.426399605773083]
Evaluations of large-scale recognition methods typically focus on overall performance.
Addressing fairness in Open-Set Recognition (OSR), we demonstrate that per-class performance can vary dramatically.
We apply Z-score normalization to logits to mitigate the impact of feature magnitudes that deviate from the model's expectations.
arXiv Detail & Related papers (2025-02-05T16:56:14Z) - Robust Fine-tuning of Zero-shot Models via Variance Reduction [56.360865951192324]
When fine-tuning zero-shot models, our desideratum is for the fine-tuned model to excel on both in-distribution (ID) and out-of-distribution (OOD) data.
We propose a sample-wise ensembling technique that can simultaneously attain the best ID and OOD accuracy without the trade-offs.
arXiv Detail & Related papers (2024-11-11T13:13:39Z) - Average Certified Radius is a Poor Metric for Randomized Smoothing [7.960121888896864]
We show that the average certified radius (ACR) is a poor metric for evaluating robustness guarantees provided by randomized smoothing.
We propose strategies, including explicitly discarding hard samples, reweighting the dataset with approximate certified radius, and extreme optimization for easy samples, to achieve state-of-the-art ACR.
arXiv Detail & Related papers (2024-10-09T13:58:41Z) - Covariance-corrected Whitening Alleviates Network Degeneration on Imbalanced Classification [6.197116272789107]
Class imbalance is a critical issue in image classification that significantly affects the performance of deep recognition models.
We propose a novel framework called Whitening-Net to mitigate the degenerate solutions.
In scenarios with extreme class imbalance, the batch covariance statistic exhibits significant fluctuations, impeding the convergence of the whitening operation.
arXiv Detail & Related papers (2024-08-30T10:49:33Z) - The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z) - (Certified!!) Adversarial Robustness for Free! [116.6052628829344]
We certify 71% accuracy on ImageNet under adversarial perturbations constrained to be within a 2-norm of 0.5.
We obtain these results using only pretrained diffusion models and image classifiers, without requiring any fine-tuning or retraining of model parameters.
arXiv Detail & Related papers (2022-06-21T17:27:27Z) - Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in real-world federated systems is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Insta-RS: Instance-wise Randomized Smoothing for Improved Robustness and
Accuracy [9.50143683501477]
Insta-RS is a multiple-start search algorithm that assigns customized Gaussian variances to test examples.
Insta-RS Train is a novel two-stage training algorithm that adaptively adjusts and customizes the noise level of each training example.
We show that our method significantly enhances the average certified radius (ACR) as well as the clean data accuracy.
arXiv Detail & Related papers (2021-03-07T19:46:07Z) - Data Dependent Randomized Smoothing [127.34833801660233]
We show that our data dependent framework can be seamlessly incorporated into 3 randomized smoothing approaches.
We obtain 9% and 6% improvements over the certified accuracy of the strongest baseline for a radius of 0.5 on CIFAR-10 and ImageNet, respectively.
arXiv Detail & Related papers (2020-12-08T10:53:11Z) - Iterative Averaging in the Quest for Best Test Error [22.987387623516614]
We analyse and explain the increased generalisation performance of iterate averaging using a Gaussian process perturbation model.
We derive three phenomena from our theoretical results.
We showcase the efficacy of our approach on the CIFAR-10/100, ImageNet and Penn Treebank datasets.
arXiv Detail & Related papers (2020-03-02T23:27:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.