Towards Better Adversarial Purification via Adversarial Denoising Diffusion Training
- URL: http://arxiv.org/abs/2404.14309v1
- Date: Mon, 22 Apr 2024 16:10:38 GMT
- Title: Towards Better Adversarial Purification via Adversarial Denoising Diffusion Training
- Authors: Yiming Liu, Kezhao Liu, Yao Xiao, Ziyi Dong, Xiaogang Xu, Pengxu Wei, Liang Lin
- Abstract summary: Diffusion-based purification (DBP) has emerged as a promising approach for defending against adversarial attacks.
Previous studies have used questionable methods to evaluate the robustness of DBP models.
We propose Adversarial Denoising Diffusion Training (ADDT) to improve the robustness of DBP models.
- Score: 65.10019978876863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, diffusion-based purification (DBP) has emerged as a promising approach for defending against adversarial attacks. However, previous studies have used questionable methods to evaluate the robustness of DBP models, and their explanations of DBP robustness also lack experimental support. We re-examine DBP robustness using precise gradients and discuss the impact of stochasticity on DBP robustness. To better explain DBP robustness, we assess it under a novel attack setting, Deterministic White-box, and pinpoint stochasticity as the main factor in DBP robustness. Our results suggest that DBP models rely on stochasticity to evade the most effective attack direction, rather than directly countering adversarial perturbations. To improve the robustness of DBP models, we propose Adversarial Denoising Diffusion Training (ADDT). This technique uses Classifier-Guided Perturbation Optimization (CGPO) to generate adversarial perturbations under the guidance of a pre-trained classifier, and Rank-Based Gaussian Mapping (RBGM) to convert adversarial perturbations into a normal Gaussian distribution. Empirical results show that ADDT improves the robustness of DBP models. Further experiments confirm that ADDT equips DBP models with the ability to directly counter adversarial perturbations.
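The abstract does not spell out RBGM's construction, but one natural reading is a rank-matching transform: replace each perturbation value with the Gaussian sample of the same rank, so the result is normally distributed while the perturbation's ordering is preserved. The sketch below is an illustration under that assumption, not the paper's implementation; the function name is ours.

```python
import numpy as np

def rank_based_gaussian_mapping(perturbation: np.ndarray, rng=None) -> np.ndarray:
    """Replace perturbation values with rank-matched N(0, 1) samples (hypothetical)."""
    rng = np.random.default_rng() if rng is None else rng
    flat = perturbation.ravel()
    gaussian = np.sort(rng.standard_normal(flat.size))  # sorted Gaussian samples
    ranks = np.argsort(np.argsort(flat))                # rank of each entry in flat
    return gaussian[ranks].reshape(perturbation.shape)  # same ordering, Gaussian values

# The mapped perturbation is (empirically) standard normal.
delta = np.random.uniform(-8 / 255, 8 / 255, size=(3, 32, 32))
mapped = rank_based_gaussian_mapping(delta)
print(round(float(mapped.mean()), 3), round(float(mapped.std()), 3))  # ~0.0, ~1.0
```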
Related papers
- IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency [20.61046457594186]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
This paper proposes a simple yet effective input-level backdoor detection (dubbed IBD-PSC) to filter out malicious testing images.
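IBD-PSC's details are not given in this summary; the hedged sketch below only shows the general shape of a parameter-scaling consistency check, where batch-norm weights are amplified and inputs whose predictions stay unusually consistent across the scaled copies are flagged. The layer selection, scaling factors, threshold, and both function names are illustrative assumptions, not the paper's settings.

```python
import copy
import torch
import torch.nn as nn

def scaled_copies(model: nn.Module, factors=(1.5, 2.0, 2.5)):
    """Copies of the model with all batch-norm weights amplified (illustrative)."""
    copies = []
    for f in factors:
        m = copy.deepcopy(model)
        for layer in m.modules():
            if isinstance(layer, nn.BatchNorm2d):
                layer.weight.data.mul_(f)
        copies.append(m.eval())
    return copies

@torch.no_grad()
def is_suspicious(model: nn.Module, x: torch.Tensor, threshold: float = 0.9) -> bool:
    """Flag inputs whose prediction survives parameter scaling too consistently."""
    base = model(x).argmax(dim=1)
    agreement = [(m(x).argmax(dim=1) == base).float().mean().item()
                 for m in scaled_copies(model)]
    return sum(agreement) / len(agreement) >= threshold
```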
arXiv Detail & Related papers (2024-05-16T03:19:52Z) - The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
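For readers unfamiliar with conformal prediction, here is a minimal split-conformal sketch; this is standard CP machinery, not the paper's method, and the paper's contribution is studying how such prediction sets behave when the underlying model is adversarially attacked. The stand-in calibration scores are ours.

```python
import numpy as np

def conformal_quantile(cal_scores: np.ndarray, alpha: float = 0.1) -> float:
    """Finite-sample-corrected (1 - alpha) quantile of calibration nonconformity scores."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return float(np.quantile(cal_scores, min(q, 1.0), method="higher"))

def prediction_set(probs: np.ndarray, qhat: float) -> np.ndarray:
    """Include every label y whose score 1 - p_y does not exceed the quantile."""
    return np.where(1.0 - probs <= qhat)[0]

rng = np.random.default_rng(0)
cal_probs_true = rng.uniform(0.5, 1.0, size=500)         # stand-in: p(true label)
qhat = conformal_quantile(1.0 - cal_probs_true)
print(prediction_set(rng.dirichlet(np.ones(10)), qhat))  # labels kept in the set
```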
arXiv Detail & Related papers (2024-05-14T18:05:19Z) - Perturbation-Invariant Adversarial Training for Neural Ranking Models: Improving the Effectiveness-Robustness Trade-Off [107.35833747750446]
Adversarial examples can be crafted by adding imperceptible perturbations to legitimate documents.
This vulnerability raises significant concerns about the reliability of neural ranking models (NRMs) and hinders their widespread deployment.
In this study, we establish theoretical guarantees regarding the effectiveness-robustness trade-off in NRMs.
arXiv Detail & Related papers (2023-12-16T05:38:39Z) - Enhancing Adversarial Robustness via Score-Based Optimization [22.87882885963586]
Adversarial attacks have the potential to mislead deep neural network classifiers by introducing slight perturbations.
We introduce a novel adversarial defense scheme named ScoreOpt, which optimizes adversarial samples at test time.
Our experimental results demonstrate that our approach outperforms existing adversarial defenses in terms of both robustness performance and inference speed.
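A hedged sketch of the general idea of test-time score-based purification: move the input along an estimated score (the gradient of the log-density) so it drifts back toward the data manifold. ScoreOpt's actual objective and update rule may differ; `score_model` and the Gaussian stand-in below are ours.

```python
import torch

def purify(x: torch.Tensor, score_model, steps: int = 20, lr: float = 0.1) -> torch.Tensor:
    """Gradient steps along the estimated score, i.e. ascent on log-density."""
    x = x.clone()
    with torch.no_grad():
        for _ in range(steps):
            x = x + lr * score_model(x)
    return x

# Stand-in score of a Gaussian centered at 0.5: grad log p(x) = -(x - 0.5).
x_adv = torch.randn(1, 3, 32, 32) + 0.5
purified = purify(x_adv, score_model=lambda x: -(x - 0.5))
```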
arXiv Detail & Related papers (2023-07-10T03:59:42Z) - Direct Diffusion Bridge using Data Consistency for Inverse Problems [65.04689839117692]
Diffusion model-based inverse problem solvers have shown impressive performance, but are limited in speed.
Several recent works have tried to alleviate this problem by building a diffusion process that directly bridges the clean and the corrupted data.
We propose a modified inference procedure that imposes data consistency without the need for fine-tuning.
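As a concrete picture of a data-consistency step for a linear inverse problem y = Ax, the sketch below shows the simplest case, inpainting, where the projection just reinstates the observed pixels after each bridge/diffusion update. The paper's actual consistency operator may be more elaborate; this is an illustration only.

```python
import numpy as np

def data_consistency(x_est: np.ndarray, y: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep the model's guess where unobserved, the measurement where observed."""
    return np.where(mask, y, x_est)

rng = np.random.default_rng(0)
mask = rng.random((32, 32)) > 0.5              # observed-pixel mask (the operator A)
x_true = rng.random((32, 32))
y = np.where(mask, x_true, 0.0)                # masked measurements y = A x
x_est = rng.random((32, 32))                   # e.g., one bridge/diffusion update
x_est = data_consistency(x_est, y, mask)
assert np.allclose(x_est[mask], x_true[mask])  # constraint now holds exactly
```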
arXiv Detail & Related papers (2023-05-31T12:51:10Z) - Causal Information Bottleneck Boosts Adversarial Robustness of Deep Neural Network [3.819052032134146]
The information bottleneck (IB) method is a feasible defense solution against adversarial attacks in deep learning.
We incorporate causal inference into the IB framework to alleviate this problem.
Our method exhibits considerable robustness against multiple adversarial attacks.
arXiv Detail & Related papers (2022-10-25T12:49:36Z) - Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as the guided diffusion model for purification (GDMP).
In comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations introduced by adversarial attacks to a shallow range.
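A hedged sketch of guided diffusion purification in this spirit: noise the (possibly adversarial) input with a partial forward diffusion, then denoise while a guidance term keeps the trajectory close to the input so image semantics survive. `denoise_step`, the noise schedule, and the guidance weight are stand-ins, not GDMP's exact formulation.

```python
import torch

def purify_guided(x_adv, denoise_step, t_star=0.3, steps=30, guidance=1.0):
    """Partial forward diffusion, then guided reverse steps toward the input."""
    x = (1 - t_star) ** 0.5 * x_adv + t_star ** 0.5 * torch.randn_like(x_adv)
    for i in range(steps):
        x = denoise_step(x, i)                     # stand-in reverse-diffusion update
        x = x - (guidance / steps) * (x - x_adv)   # pull back toward the input
    return x

# Toy usage with a trivial shrinkage "denoiser" standing in for a diffusion model.
out = purify_guided(torch.rand(1, 3, 32, 32), denoise_step=lambda x, i: 0.98 * x)
```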
arXiv Detail & Related papers (2022-05-30T10:11:15Z) - Non-Negative Bregman Divergence Minimization for Deep Direct Density Ratio Estimation [18.782750537161615]
We propose a non-negative correction for empirical Bregman divergence (BD) estimators to mitigate train-loss hacking.
We show that the proposed methods achieve favorable performance in inlier-based outlier detection.
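As we read the summary, the core device is that some term of the empirical BD estimator is non-negative in expectation but can be driven negative on finite data, and the correction clamps it at zero; a minimal sketch of that clamp follows (the estimator's actual decomposition is in the paper).

```python
import torch

def non_negative(term: torch.Tensor) -> torch.Tensor:
    """Clamp a loss term that is non-negative in expectation but can go
    negative on finite training data ("train-loss hacking")."""
    return torch.clamp(term, min=0.0)

part = torch.tensor(-0.3)
print(non_negative(part))  # tensor(0.) -- this part can no longer be driven below zero
```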
arXiv Detail & Related papers (2020-06-12T07:39:03Z) - BERT Loses Patience: Fast and Robust Inference with Early Exit [91.26199404912019]
We propose Patience-based Early Exit as a plug-and-play technique to improve the efficiency and robustness of a pretrained language model.
Our approach improves inference efficiency as it allows the model to make a prediction with fewer layers.
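A minimal sketch of patience-based early exit: attach a classifier head to every layer and stop as soon as the intermediate prediction has stayed the same for `patience` consecutive layers. The layer and head modules are stand-ins, and the batch-size-1 simplification is ours.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def patience_forward(x, layers, heads, patience=3):
    """Run layer by layer; exit once the prediction repeats `patience` times.
    Assumes batch size 1 (stand-in simplification)."""
    prev, streak, h = None, 0, x
    for layer, head in zip(layers, heads):
        h = layer(h)
        pred = head(h).argmax(dim=-1).item()
        streak = streak + 1 if pred == prev else 1
        prev = pred
        if streak >= patience:     # stable prediction: stop early
            return pred
    return prev                    # fell through: use the deepest prediction

# Toy usage with random linear layers standing in for a pretrained LM.
layers = [nn.Linear(16, 16) for _ in range(12)]
heads = [nn.Linear(16, 4) for _ in range(12)]
print(patience_forward(torch.randn(1, 16), layers, heads))
```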
arXiv Detail & Related papers (2020-06-07T13:38:32Z) - Blind Adversarial Pruning: Balance Accuracy, Efficiency and Robustness [3.039568795810294]
This paper first investigates the robustness of pruned models with different compression ratios under the gradual pruning process.
We then test the performance of mixing clean data and adversarial examples into the gradual pruning process, which we call adversarial pruning.
To better balance accuracy, efficiency, and robustness (AER), we propose blind adversarial pruning (BAP), which introduces the idea of blind adversarial training into the gradual pruning process.
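A hedged sketch of one adversarial-pruning step in this spirit: raise sparsity a little, then train on a mix of clean and adversarial examples. The one-step FGSM attack and the PyTorch pruning utility are our stand-ins; BAP's "blind" perturbation schedule is not reproduced here.

```python
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def fgsm(model, x, y, eps=8 / 255):
    """One-step attack used to craft the adversarial half of the batch."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_pruning_step(model, opt, x, y, layer, amount=0.05):
    prune.l1_unstructured(layer, name="weight", amount=amount)  # raise sparsity a little
    x_mix = torch.cat([x, fgsm(model, x, y)])                   # clean + adversarial
    y_mix = torch.cat([y, y])
    opt.zero_grad()
    F.cross_entropy(model(x_mix), y_mix).backward()
    opt.step()
```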
arXiv Detail & Related papers (2020-04-10T02:27:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.