Insta-RS: Instance-wise Randomized Smoothing for Improved Robustness and
Accuracy
- URL: http://arxiv.org/abs/2103.04436v1
- Date: Sun, 7 Mar 2021 19:46:07 GMT
- Title: Insta-RS: Instance-wise Randomized Smoothing for Improved Robustness and
Accuracy
- Authors: Chen Chen, Kezhi Kong, Peihong Yu, Juan Luque, Furong Huang
- Abstract summary: Insta-RS is a multiple-start search algorithm that assigns customized Gaussian variances to test examples.
Insta-RS Train is a novel two-stage training algorithm that adaptively adjusts and customizes the noise level of each training example.
We show that our method significantly enhances the average certified radius (ACR) as well as the clean data accuracy.
- Score: 9.50143683501477
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Randomized smoothing (RS) is an effective and scalable technique for
constructing neural network classifiers that are certifiably robust to
adversarial perturbations. Most RS works focus on training a good base model
that boosts the certified robustness of the smoothed model. However, existing
RS techniques treat every data point the same, i.e., the variance of the
Gaussian noise used to form the smoothed model is preset and universal for all
training and test data. This preset and universal Gaussian noise variance is
suboptimal since different data points have different margins and the local
properties of the base model vary across the input examples. In this paper, we
examine the impact of customized handling of examples and propose Instance-wise
Randomized Smoothing (Insta-RS) -- a multiple-start search algorithm that
assigns customized Gaussian variances to test examples. We also design Insta-RS
Train -- a novel two-stage training algorithm that adaptively adjusts and
customizes the noise level of each training example for training a base model
that boosts the certified robustness of the instance-wise Gaussian smoothed
model. Through extensive experiments on CIFAR-10 and ImageNet, we show that our
method significantly enhances the average certified radius (ACR) as well as the
clean data accuracy compared to existing state-of-the-art provably robust
classifiers.
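To make the instance-wise idea concrete, below is a minimal sketch of per-example noise selection for randomized smoothing certification. It is an illustration under stated assumptions, not the authors' released implementation: the `base_model`, the candidate sigma grid, the sampling budget, and the helper names are hypothetical, and a simple grid search stands in for the paper's multiple-start search.

```python
# A minimal sketch of instance-wise randomized smoothing (not the authors' code):
# for each test input we try a small grid of candidate Gaussian noise levels
# (a simplified stand-in for Insta-RS's multiple-start search) and keep the
# sigma that yields the largest certified L2 radius for that instance.
# `base_model`, the sigma grid, and the sampling budget are illustrative assumptions.
import torch
from scipy.stats import beta, norm


def lower_confidence_bound(k: int, n: int, alpha: float) -> float:
    """One-sided (1 - alpha) Clopper-Pearson lower bound on the top-class probability."""
    return 0.0 if k == 0 else float(beta.ppf(alpha, k, n - k + 1))


@torch.no_grad()
def certify_with_sigma(base_model, x, sigma, n_samples=1000, alpha=0.001, batch=100):
    """Cohen-style certification of one input x at noise level sigma.

    Returns (predicted_class, certified_radius); radius 0.0 means abstain.
    Uses a single sample pool for both selection and estimation (a simplification).
    """
    counts = None
    remaining = n_samples
    while remaining > 0:
        b = min(batch, remaining)
        noisy = x.unsqueeze(0) + sigma * torch.randn(b, *x.shape)
        logits = base_model(noisy)
        preds = logits.argmax(dim=1)
        batch_counts = torch.bincount(preds, minlength=logits.shape[1])
        counts = batch_counts if counts is None else counts + batch_counts
        remaining -= b
    top_class = int(counts.argmax())
    p_lower = lower_confidence_bound(int(counts[top_class]), n_samples, alpha)
    if p_lower <= 0.5:
        return top_class, 0.0  # abstain: no certificate at this noise level
    return top_class, sigma * float(norm.ppf(p_lower))


def instance_wise_certify(base_model, x, sigma_grid=(0.12, 0.25, 0.5, 1.0)):
    """Pick, per instance, the candidate sigma that maximizes the certified radius."""
    best_class, best_radius, best_sigma = None, 0.0, None
    for sigma in sigma_grid:
        cls, radius = certify_with_sigma(base_model, x, sigma)
        if radius > best_radius:
            best_class, best_radius, best_sigma = cls, radius, sigma
    return best_class, best_radius, best_sigma
```

The returned per-instance sigma reflects the intuition behind Insta-RS: inputs with larger margins tolerate larger noise and therefore earn larger certified radii, while harder inputs keep a smaller noise level to preserve accuracy.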
Related papers
- Foster Adaptivity and Balance in Learning with Noisy Labels [26.309508654960354]
We propose a novel approach named SED to deal with label noise in a self-adaptive and class-balanced manner.
A mean-teacher model is then employed to correct labels of noisy samples.
We additionally propose a self-adaptive and class-balanced sample re-weighting mechanism to assign different weights to detected noisy samples.
arXiv Detail & Related papers (2024-07-03T03:10:24Z)
- Importance of Disjoint Sampling in Conventional and Transformer Models for Hyperspectral Image Classification [2.1223532600703385]
This paper presents an innovative disjoint sampling approach for training SOTA models on Hyperspectral image classification (HSIC) tasks.
By separating training, validation, and test data without overlap, the proposed method facilitates a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation.
This rigorous methodology is critical for advancing SOTA models and their real-world application to large-scale land mapping with Hyperspectral sensors.
arXiv Detail & Related papers (2024-04-23T11:40:52Z)
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- Neural Priming for Sample-Efficient Adaptation [92.14357804106787]
We propose Neural Priming, a technique for adapting large pretrained models to distribution shifts and downstream tasks.
Neural Priming can be performed at test time, even for pretraining datasets as large as LAION-2B.
arXiv Detail & Related papers (2023-06-16T21:53:16Z)
- Decoupled Training for Long-Tailed Classification With Stochastic Representations [15.990318581975435]
Decoupling representation learning and classifier learning has been shown to be effective in classification with long-tailed data.
We first apply Stochastic Weight Averaging (SWA), an optimization technique for improving the generalization of deep neural networks, to obtain better-generalizing feature extractors for long-tailed classification.
We then propose a novel classifier re-training algorithm based on perturbed representations obtained from SWA-Gaussian, a Gaussian variant of SWA, and a self-distillation strategy.
arXiv Detail & Related papers (2023-04-19T05:35:09Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
- Boosting Randomized Smoothing with Variance Reduced Classifiers [4.110108749051657]
We motivate why ensembles are a particularly suitable choice as base models for Randomized Smoothing (RS).
We empirically confirm this choice, obtaining state-of-the-art results in multiple settings.
arXiv Detail & Related papers (2021-06-13T08:40:27Z)
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
- Data Dependent Randomized Smoothing [127.34833801660233]
We show that our data dependent framework can be seamlessly incorporated into 3 randomized smoothing approaches.
We obtain 9% and 6% improvements over the certified accuracy of the strongest baseline at a radius of 0.5 on CIFAR-10 and ImageNet, respectively.
arXiv Detail & Related papers (2020-12-08T10:53:11Z)