Stochastic Security: Adversarial Defense Using Long-Run Dynamics of
Energy-Based Models
- URL: http://arxiv.org/abs/2005.13525v2
- Date: Thu, 18 Mar 2021 19:39:26 GMT
- Title: Stochastic Security: Adversarial Defense Using Long-Run Dynamics of
Energy-Based Models
- Authors: Mitch Hill, Jonathan Mitchell, Song-Chun Zhu
- Abstract summary: The vulnerability of deep networks to adversarial attacks is a central problem for deep learning from the perspective of both cognition and security.
We focus on defending naturally-trained classifiers using Markov Chain Monte Carlo (MCMC) sampling with an Energy-Based Model (EBM) for adversarial purification.
Our contributions are 1) an improved method for training EBM's with realistic long-run MCMC samples, 2) Expectation-Over-Transformation (EOT) defense that resolves theoretical ambiguities for defenses, and 3) state-of-the-art adversarial defense for naturally-trained classifiers and competitive defense.
- Score: 82.03536496686763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The vulnerability of deep networks to adversarial attacks is a central
problem for deep learning from the perspective of both cognition and security.
The current most successful defense method is to train a classifier using
adversarial images created during learning. Another defense approach involves
transformation or purification of the original input to remove adversarial
signals before the image is classified. We focus on defending naturally-trained
classifiers using Markov Chain Monte Carlo (MCMC) sampling with an Energy-Based
Model (EBM) for adversarial purification. In contrast to adversarial training,
our approach is intended to secure pre-existing and highly vulnerable
classifiers.
The memoryless behavior of long-run MCMC sampling will eventually remove
adversarial signals, while metastable behavior preserves consistent appearance
of MCMC samples after many steps to allow accurate long-run prediction.
Balancing these factors can lead to effective purification and robust
classification. We evaluate adversarial defense with an EBM using the strongest
known attacks against purification. Our contributions are 1) an improved method
for training EBMs with realistic long-run MCMC samples, 2) an
Expectation-Over-Transformation (EOT) defense that resolves theoretical
ambiguities for stochastic defenses and from which the EOT attack naturally
follows, and 3) state-of-the-art adversarial defense for naturally-trained
classifiers and competitive defense compared to adversarially-trained
classifiers on CIFAR-10, SVHN, and CIFAR-100. Code and pre-trained models are
available at https://github.com/point0bar1/ebm-defense.
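To make the purification step concrete, below is a minimal sketch of long-run Langevin-dynamics MCMC under a trained EBM, written in generic PyTorch. The energy network `ebm`, the assumed pixel range, and the step-size, noise, and step-count values are illustrative placeholders, not the settings used in the paper; the authors' actual implementation is in the linked repository.

```python
import torch

def langevin_purify(x, ebm, n_steps=1500, step_size=1e-2, noise_scale=1e-2):
    """Long-run Langevin MCMC under the energy U(x) defined by `ebm`.

    Each update is x <- x - (step_size / 2) * grad U(x) + noise, so the chain
    gradually "forgets" its (possibly adversarial) initialization, while
    metastable behavior keeps the image visually consistent over many steps.
    Assumes pixel values in [0, 1]; all hyperparameters are placeholders.
    """
    x = x.clone().detach()
    for _ in range(n_steps):
        x.requires_grad_(True)
        energy = ebm(x).sum()                      # summed scalar energy of the batch
        grad = torch.autograd.grad(energy, x)[0]   # gradient of the energy w.r.t. pixels
        with torch.no_grad():
            x = x - 0.5 * step_size * grad + noise_scale * torch.randn_like(x)
            x = x.clamp(0.0, 1.0)                  # stay in a valid image range
    return x.detach()
```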
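The EOT defense from contribution (2) can then be read as averaging the classifier's predictive distribution over several independent stochastic purification runs. The sketch below assumes the hypothetical `langevin_purify` helper above and a generic PyTorch classifier `clf`; it illustrates the idea only, not the paper's exact procedure.

```python
import torch

def eot_defense_predict(x, ebm, clf, n_replicates=8, **mcmc_kwargs):
    """Expectation-Over-Transformation defense: average softmax outputs over
    independent purification runs, then predict from the mean distribution."""
    probs = []
    for _ in range(n_replicates):
        x_pure = langevin_purify(x, ebm, **mcmc_kwargs)   # fresh MCMC chain per replicate
        probs.append(torch.softmax(clf(x_pure), dim=-1))
    mean_probs = torch.stack(probs).mean(dim=0)           # Monte Carlo estimate of E[p(y | purify(x))]
    return mean_probs.argmax(dim=-1), mean_probs
```

Framed this way, the matching EOT attack follows naturally: the attacker targets a loss on the same averaged prediction, backpropagating through (or approximating) the sampling chain rather than attacking a single stochastic run.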
Related papers
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, but adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Adversarial Training on Purification (AToP): Advancing Both Robustness and Generalization [29.09894840783714]
We propose a novel pipeline to acquire a robust purifier model, named Adversarial Training on Purification (AToP).
To evaluate our method in an efficient and scalable way, we conduct extensive experiments on CIFAR-10, CIFAR-100, and ImageNette.
arXiv Detail & Related papers (2024-01-29T17:56:42Z)
- How Robust Are Energy-Based Models Trained With Equilibrium Propagation? [4.374837991804085]
Adversarial training is the current state-of-the-art defense against adversarial attacks.
It lowers the model's accuracy on clean inputs, is computationally expensive, and offers less robustness to natural noise.
In contrast, energy-based models (EBMs) incorporate feedback connections from each layer to the previous layer, yielding a recurrent, deep-attractor architecture.
arXiv Detail & Related papers (2024-01-21T16:55:40Z)
- Carefully Blending Adversarial Training and Purification Improves Adversarial Robustness [1.2289361708127877]
CARSO is able to defend itself against adaptive end-to-end white-box attacks devised for defences.
Our method improves the state-of-the-art by a significant margin for CIFAR-10, CIFAR-100, and TinyImageNet-200.
arXiv Detail & Related papers (2023-05-25T09:04:31Z)
- MIXPGD: Hybrid Adversarial Training for Speech Recognition Systems [18.01556863687433]
We propose the mixPGD adversarial training method to improve the robustness of models for ASR systems.
In standard adversarial training, adversarial samples are generated by leveraging supervised or unsupervised methods.
We merge the capabilities of both supervised and unsupervised approaches in our method to generate new adversarial samples which aid in improving model robustness.
arXiv Detail & Related papers (2023-03-10T07:52:28Z)
- Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z)