Multi-scale Diffusion Denoised Smoothing
- URL: http://arxiv.org/abs/2310.16779v3
- Date: Fri, 27 Oct 2023 17:51:17 GMT
- Title: Multi-scale Diffusion Denoised Smoothing
- Authors: Jongheon Jeong, Jinwoo Shin
- Abstract summary: Randomized smoothing has become one of a few tangible approaches that offer adversarial robustness to models at scale.
We present scalable methods to address the current trade-off between certified robustness and accuracy in denoised smoothing.
Our experiments show that the proposed multi-scale smoothing scheme, combined with diffusion fine-tuning, enables strong certified robustness at high noise levels.
- Score: 79.95360025953931
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Along with recent diffusion models, randomized smoothing has become one of a
few tangible approaches that offer adversarial robustness to models at scale,
e.g., those of large pre-trained models. Specifically, one can perform
randomized smoothing on any classifier via a simple "denoise-and-classify"
pipeline, so-called denoised smoothing, provided that an accurate denoiser,
such as a diffusion model, is available. In this paper, we present scalable methods
to address the current trade-off between certified robustness and accuracy in
denoised smoothing. Our key idea is to "selectively" apply smoothing among
multiple noise scales, coined multi-scale smoothing, which can be efficiently
implemented with a single diffusion model. This approach also suggests a new
objective for comparing the collective robustness of multi-scale smoothed
classifiers, and raises the question of which representation of the diffusion model
would maximize that objective. To address this, we propose to further fine-tune the
diffusion model (a) to perform consistent denoising whenever the original image
is recoverable, but (b) to generate rather diverse outputs otherwise. Our
experiments show that the proposed multi-scale smoothing scheme, combined with
diffusion fine-tuning, enables strong certified robustness at high noise
levels while maintaining accuracy close to that of non-smoothed classifiers.
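To make the "denoise-and-classify" pipeline and the selective multi-scale idea concrete, here is a minimal sketch. It assumes a single-image tensor `x`, a pretrained `classifier`, and a `denoiser(x_noisy, sigma)` wrapper around a diffusion model that performs one-shot denoising at the timestep corresponding to `sigma`; the acceptance rule for moving between noise scales is an illustrative placeholder, not the paper's actual procedure.

```python
import torch

def denoised_smoothing_predict(x, classifier, denoiser, sigma,
                               n_samples=100, num_classes=1000):
    """Denoise-and-classify (denoised smoothing, sketch): corrupt x with
    Gaussian noise, denoise each copy with a diffusion model, classify the
    result, and aggregate the votes."""
    votes = torch.zeros(num_classes)
    for _ in range(n_samples):
        x_noisy = x + sigma * torch.randn_like(x)  # sample from N(x, sigma^2 I)
        x_hat = denoiser(x_noisy, sigma)           # one-shot denoising at the timestep matching sigma
        pred = classifier(x_hat.unsqueeze(0)).argmax(dim=1).item()
        votes[pred] += 1
    return votes

def multi_scale_predict(x, classifier, denoiser,
                        sigmas=(1.0, 0.5, 0.25), num_classes=1000):
    """Selective multi-scale smoothing (simplified sketch): start from the
    largest noise scale and fall back to a smaller one whenever the smoothed
    classifier at the current scale lacks a clear majority."""
    for sigma in sigmas:
        votes = denoised_smoothing_predict(x, classifier, denoiser, sigma,
                                           num_classes=num_classes)
        top = int(votes.argmax())
        if votes[top] > 0.5 * votes.sum():  # illustrative acceptance rule only
            return top, sigma
    return top, sigma                       # smallest scale as the final fallback
```

In practice, certification at each scale would replace the simple majority check with a confidence bound over the vote counts (e.g., a Clopper-Pearson interval), as in standard randomized smoothing.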
Related papers
- Training-free Diffusion Model Alignment with Sampling Demons [15.400553977713914]
We propose an optimization approach, dubbed Demon, to guide the denoising process at inference time without backpropagation through reward functions or model retraining.
Our approach works by controlling noise distribution in denoising steps to concentrate density on regions corresponding to high rewards through optimization.
To the best of our knowledge, the proposed approach is the first inference-time, backpropagation-free preference alignment method for diffusion models.
arXiv Detail & Related papers (2024-10-08T07:33:49Z) - DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local
Smoothing [39.962024242809136]
We propose DiffSmooth, which first performs adversarial purification via diffusion models and then maps the purified instances to a common region via a simple yet effective local smoothing strategy.
For instance, DiffSmooth improves the SOTA certified accuracy from $36.0\%$ to $53.0\%$ under $\ell_2$ radius $1.5$ on ImageNet.
arXiv Detail & Related papers (2023-08-28T06:22:43Z) - Towards Better Certified Segmentation via Diffusion Models [62.21617614504225]
Segmentation models can be vulnerable to adversarial perturbations, which hinders their use in critical decision systems such as healthcare or autonomous driving.
Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees.
In this paper, we address the problem of certifying segmentation prediction using a combination of randomized smoothing and diffusion models.
arXiv Detail & Related papers (2023-06-16T16:30:39Z) - DensePure: Understanding Diffusion Models towards Adversarial Robustness [110.84015494617528]
We analyze the properties of diffusion models and establish the conditions under which they can enhance certified robustness.
We propose a new method, DensePure, designed to improve the certified robustness of a pretrained model (i.e., a classifier).
We show that this robust region is a union of multiple convex sets, and is potentially much larger than the robust regions identified in previous works.
arXiv Detail & Related papers (2022-11-01T08:18:07Z) - Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z) - SmoothMix: Training Confidence-calibrated Smoothed Classifiers for
Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z) - Consistency Regularization for Certified Robustness of Smoothed
Classifiers [89.72878906950208]
The recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be largely controlled by simply regularizing the prediction consistency over noise (a minimal sketch of such a consistency loss follows this list).
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.