DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local
Smoothing
- URL: http://arxiv.org/abs/2308.14333v1
- Date: Mon, 28 Aug 2023 06:22:43 GMT
- Title: DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local
Smoothing
- Authors: Jiawei Zhang, Zhongzhu Chen, Huan Zhang, Chaowei Xiao, Bo Li
- Abstract summary: We propose DiffSmooth, which first performs adversarial purification via diffusion models and then maps the purified instances to a common region via a simple yet effective local smoothing strategy.
For instance, DiffSmooth improves the SOTA certified accuracy from $36.0\%$ to $53.0\%$ under $\ell_2$ radius $1.5$ on ImageNet.
- Score: 39.962024242809136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models have been leveraged to perform adversarial purification and
thus provide both empirical and certified robustness for a standard model. On
the other hand, different robustly trained smoothed models have been studied to
improve the certified robustness. This raises a natural question: Can
diffusion models be used to achieve improved certified robustness on those
robustly trained smoothed models? In this work, we first theoretically show
that instances recovered by diffusion models lie in a bounded neighborhood of
the original instance with high probability, and that "one-shot" denoising
diffusion probabilistic models (DDPM) can approximate the mean of the generated
distribution of a continuous-time diffusion model, which approximates the
original instance under mild conditions. Inspired by our analysis, we propose a
certifiably robust pipeline DiffSmooth, which first performs adversarial
purification via diffusion models and then maps the purified instances to a
common region via a simple yet effective local smoothing strategy. We conduct
extensive experiments on different datasets and show that DiffSmooth achieves
state-of-the-art (SOTA) certified robustness compared with eight baselines. For
instance, DiffSmooth improves the SOTA certified accuracy from $36.0\%$ to
$53.0\%$ under $\ell_2$ radius $1.5$ on ImageNet. The code is available at
[https://github.com/javyduck/DiffSmooth].
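For intuition, below is a minimal sketch of the pipeline described in the abstract: one-shot DDPM denoising for purification, followed by local smoothing (voting over small extra noise), used as the base classifier inside standard randomized smoothing. The function names, the `denoiser` and `classifier` callables, the `alpha_bar` schedule, and all hyperparameters are illustrative assumptions rather than the authors' released implementation (see the repository above for the real code).

```python
# Hedged sketch of the DiffSmooth idea (not the authors' code): purify a
# Gaussian-perturbed input with one-shot DDPM denoising, then locally smooth
# the purified image by majority-voting the classifier over small extra noise.
import torch

def sigma_to_timestep(sigma: float, alpha_bar: torch.Tensor) -> int:
    """Pick the diffusion timestep whose effective noise level matches the
    smoothing sigma, i.e. sqrt((1 - a_bar_t) / a_bar_t) ~= sigma."""
    noise_levels = ((1.0 - alpha_bar) / alpha_bar).sqrt()
    return int(torch.argmin((noise_levels - sigma).abs()))

def one_shot_denoise(denoiser, x_noisy, sigma, alpha_bar):
    """One-shot DDPM denoising: estimate x_0 in a single reverse step via
    x0_hat = (x_t - sqrt(1 - a_bar_t) * eps_theta(x_t, t)) / sqrt(a_bar_t)."""
    t = sigma_to_timestep(sigma, alpha_bar)
    a_bar = alpha_bar[t]
    x_t = a_bar.sqrt() * x_noisy            # rescale into the diffusion input space
    eps = denoiser(x_t, torch.tensor([t]))  # assumed eps-prediction network
    return (x_t - (1.0 - a_bar).sqrt() * eps) / a_bar.sqrt()

def diffsmooth_base_classify(denoiser, classifier, x_perturbed, sigma, alpha_bar,
                             num_classes, sigma_local=0.25, m_local=5):
    """Base prediction fed to a randomized-smoothing certifier; `x_perturbed`
    is assumed to already carry the smoothing noise N(0, sigma^2 I)."""
    x_pure = one_shot_denoise(denoiser, x_perturbed, sigma, alpha_bar)
    votes = torch.zeros(num_classes)
    for _ in range(m_local):                # local smoothing: vote over small noise
        nu = sigma_local * torch.randn_like(x_pure)
        logits = classifier((x_pure + nu).clamp(0.0, 1.0))
        votes[int(logits.argmax())] += 1
    return int(votes.argmax())
```

In the full method, this base prediction would be wrapped in the usual Monte Carlo certification procedure of randomized smoothing (sampling many Gaussian perturbations and bounding the top-class probability) to obtain the certified $\ell_2$ radii reported above.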
Related papers
- Informed Correctors for Discrete Diffusion Models [32.87362154118195]
We propose a family of informed correctors that more reliably counteracts discretization error by leveraging information learned by the model.
We also propose $k$-Gillespie's, a sampling algorithm that better utilizes each model evaluation, while still enjoying the speed and flexibility of $\tau$-leaping.
Across several real and synthetic datasets, we show that $k$-Gillespie's with informed correctors reliably produces higher quality samples at lower computational cost.
arXiv Detail & Related papers (2024-07-30T23:29:29Z)
- Multi-scale Diffusion Denoised Smoothing [79.95360025953931]
Randomized smoothing has become one of the few tangible approaches that offer adversarial robustness to models at scale.
We present scalable methods to address the current trade-off between certified robustness and accuracy in denoised smoothing.
Our experiments show that the proposed multi-scale smoothing scheme combined with diffusion fine-tuning enables strong certified robustness even at high noise levels.
arXiv Detail & Related papers (2023-10-25T17:11:21Z)
- Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative Models [49.81937966106691]
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
arXiv Detail & Related papers (2023-06-15T16:30:08Z)
- SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture the internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z)
- DensePure: Understanding Diffusion Models towards Adversarial Robustness [110.84015494617528]
We analyze the properties of diffusion models and establish the conditions under which they can enhance certified robustness.
We propose a new method, DensePure, designed to improve the certified robustness of a pretrained model (i.e., a classifier).
We show that this robust region is a union of multiple convex sets, and is potentially much larger than the robust regions identified in previous works.
arXiv Detail & Related papers (2022-11-01T08:18:07Z)
- On Distillation of Guided Diffusion Models [94.95228078141626]
We propose an approach to distilling classifier-free guided diffusion models into models that are fast to sample from.
For standard diffusion models trained in pixel space, our approach is able to generate images visually comparable to those of the original model.
For diffusion models trained in the latent space (e.g., Stable Diffusion), our approach is able to generate high-fidelity images using as few as 1 to 4 denoising steps.
arXiv Detail & Related papers (2022-10-06T18:03:56Z)
- Improved Denoising Diffusion Probabilistic Models [4.919647298882951]
We show that DDPMs can achieve competitive log-likelihoods while maintaining high sample quality.
We also find that learning the variances of the reverse diffusion process allows sampling with an order of magnitude fewer forward passes (the variance parameterization is sketched after this entry).
We show that the sample quality and likelihood of these models scale smoothly with model capacity and training compute, making them easily scalable.
arXiv Detail & Related papers (2021-02-18T23:44:17Z)
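For context, the variance parameterization referenced above can be written (reconstructed here in standard DDPM notation rather than quoted from the listing) as $\Sigma_\theta(x_t, t) = \exp\big(v \log \beta_t + (1 - v) \log \tilde{\beta}_t\big)$, where $v$ is a per-dimension network output, $\beta_t$ is the forward-process variance, and $\tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\beta_t$ is the posterior variance; the network thus interpolates in log space between the two fixed extremes.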
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.