How Robust Are Energy-Based Models Trained With Equilibrium Propagation?
- URL: http://arxiv.org/abs/2401.11543v1
- Date: Sun, 21 Jan 2024 16:55:40 GMT
- Title: How Robust Are Energy-Based Models Trained With Equilibrium Propagation?
- Authors: Siddharth Mansingh, Michal Kucer, Garrett Kenyon, Juston Moore and
Michael Teti
- Abstract summary: Adversarial training is the current state-of-the-art defense against adversarial attacks.
It lowers the model's accuracy on clean inputs, is computationally expensive, and offers less robustness to natural noise.
In contrast, energy-based models (EBMs) incorporate feedback connections from each layer to the previous layer, yielding a recurrent, deep-attractor architecture.
- Score: 4.374837991804085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are easily fooled by adversarial perturbations
that are imperceptible to humans. Adversarial training, a process where
adversarial examples are added to the training set, is the current
state-of-the-art defense against adversarial attacks, but it lowers the model's
accuracy on clean inputs, is computationally expensive, and offers less
robustness to natural noise. In contrast, energy-based models (EBMs), which
were designed for efficient implementation in neuromorphic hardware and
physical systems, incorporate feedback connections from each layer to the
previous layer, yielding a recurrent, deep-attractor architecture which we
hypothesize should make them naturally robust. Our work is the first to explore
the robustness of EBMs to both natural corruptions and adversarial attacks,
which we do using the CIFAR-10 and CIFAR-100 datasets. We demonstrate that EBMs
are more robust than transformers and display comparable robustness to
adversarially-trained DNNs on gradient-based (white-box) attacks, query-based
(black-box) attacks, and natural perturbations without sacrificing clean
accuracy, and without the need for adversarial training or additional training
techniques.
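For readers unfamiliar with the training scheme named in the title, the following is a minimal sketch of equilibrium propagation on a toy layered Hopfield-style energy model: a free phase relaxes the states to an energy minimum, a weakly clamped phase nudges the output toward the target, and the weight update contrasts the two fixed points. All names, layer sizes, the hard-sigmoid nonlinearity, and the hyperparameters below are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal equilibrium-propagation sketch (Scellier & Bengio-style) on a toy
# two-weight-matrix energy model. Illustrative only; not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 784, 256, 10
W1 = rng.normal(0.0, 0.05, (n_in, n_hid))   # input -> hidden coupling
W2 = rng.normal(0.0, 0.05, (n_hid, n_out))  # hidden -> output coupling

def rho(u):
    return np.clip(u, 0.0, 1.0)             # hard-sigmoid activation

def relax(x, y=None, beta=0.0, steps=30, dt=0.5):
    """Run the state dynamics du/dt = -dE/du (up to rho' factors),
    with an optional beta-weighted nudge of the output toward y."""
    h = np.zeros(n_hid)
    o = np.zeros(n_out)
    for _ in range(steps):
        dh = -h + rho(x) @ W1 + W2 @ rho(o)  # bottom-up drive + top-down feedback
        do = -o + rho(h) @ W2
        if y is not None:
            do += beta * (y - o)             # weakly clamp output toward target
        h = np.clip(h + dt * dh, 0.0, 1.0)
        o = np.clip(o + dt * do, 0.0, 1.0)
    return h, o

def ep_update(x, y, beta=0.5, lr=0.01):
    """Two-phase EP update: contrast the nudged fixed point with the free one."""
    global W1, W2
    h_free, o_free = relax(x)                # free phase
    h_nudge, o_nudge = relax(x, y, beta)     # weakly clamped (nudged) phase
    W1 += (lr / beta) * (np.outer(rho(x), rho(h_nudge)) - np.outer(rho(x), rho(h_free)))
    W2 += (lr / beta) * (np.outer(rho(h_nudge), rho(o_nudge)) - np.outer(rho(h_free), rho(o_free)))
    return o_free                            # free-phase output serves as the prediction
```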
Related papers
- Robust Diffusion Models for Adversarial Purification [28.313494459818497]
Diffusion model (DM) based adversarial purification (AP) has been shown to be the most powerful alternative to adversarial training (AT).
We propose a novel robust reverse process with adversarial guidance, which is independent of the given pre-trained DMs.
This robust guidance not only ensures that the purified examples retain more semantic content but also mitigates the accuracy-robustness trade-off of DMs.
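As context for the summary above, here is a minimal sketch of baseline diffusion-based adversarial purification (diffuse the input part-way, then denoise before classifying); it does not implement the paper's adversarial guidance. `score_model` and `classifier` are placeholder pre-trained networks, and the noise schedule and `t_star` are illustrative assumptions.

```python
import torch

@torch.no_grad()
def purify(x, score_model, t_star=100):
    """Diffuse the input to noise level t_star, then denoise it back (DDPM-style)."""
    betas = torch.linspace(1e-4, 2e-2, 1000)              # linear noise schedule
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    # Forward process: jump straight to step t_star in closed form.
    a_bar = alphas_bar[t_star]
    x_t = a_bar.sqrt() * x + (1.0 - a_bar).sqrt() * torch.randn_like(x)
    # Reverse process: ancestral sampling from t_star back to 0.
    for t in range(t_star, 0, -1):
        eps = score_model(x_t, torch.tensor([t]))          # predicted noise at step t
        alpha, a_bar = 1.0 - betas[t], alphas_bar[t]
        mean = (x_t - betas[t] / (1.0 - a_bar).sqrt() * eps) / alpha.sqrt()
        noise = torch.randn_like(x_t) if t > 1 else torch.zeros_like(x_t)
        x_t = mean + betas[t].sqrt() * noise
    return x_t

# logits = classifier(purify(x_adv, score_model))
```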
arXiv Detail & Related papers (2024-03-24T08:34:08Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, adversarial training (AT) has been shown to be an effective approach for improving robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
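For reference, below is a minimal single-machine sketch of the PGD adversarial-training loop that such a distributed, large-batch framework parallelizes; the attack budget, step size, and step count are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD: iterated signed-gradient ascent on the loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0.0, 1.0).detach()

def adv_train_step(model, optimizer, x, y):
    """One adversarial-training step: attack the batch, then fit the perturbed batch."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```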
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Self-Ensemble Adversarial Training for Improved Robustness [14.244311026737666]
Among the many defense methods, adversarial training remains the strongest strategy against various adversarial attacks.
Recent works mainly focus on developing new loss functions or regularizers, attempting to find the unique optimal point in the weight space.
We devise a simple but powerful Self-Ensemble Adversarial Training (SEAT) method that yields a robust classifier by averaging the weights of historical models.
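A minimal sketch of this weight-averaging idea, assuming it is realized as an exponential moving average of the parameters seen during adversarial training; the decay value and buffer handling are illustrative, not necessarily SEAT's exact procedure.

```python
import copy
import torch

def make_ema(model):
    """Create a frozen copy of the model to hold the running weight average."""
    ema = copy.deepcopy(model)
    for p in ema.parameters():
        p.requires_grad_(False)
    return ema

@torch.no_grad()
def update_ema(ema, model, decay=0.999):
    """Blend the current weights into the running average after each optimizer step."""
    for p_ema, p in zip(ema.parameters(), model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)
    for b_ema, b in zip(ema.buffers(), model.buffers()):
        b_ema.copy_(b)                       # copy BatchNorm statistics directly

# In the adversarial-training loop, call update_ema(ema_model, model) after each
# step and evaluate robustness with `ema_model` rather than `model`.
```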
arXiv Detail & Related papers (2022-03-18T01:12:18Z)
- Robustness-via-Synthesis: Robust Training with Generative Adversarial Perturbations [10.140147080535224]
Adversarial training with first-order attacks has been one of the most effective defenses against adversarial perturbations to this day.
This study presents a robust training algorithm where the adversarial perturbations are automatically synthesized from a random vector using a generator network.
Experimental results show that the proposed approach attains comparable robustness with various gradient-based and generative robust training techniques.
arXiv Detail & Related papers (2021-08-22T13:15:24Z)
- ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients [65.52888259961803]
We introduce ROPUST, a simple and efficient method to leverage robust pre-trained models and increase their robustness.
We test our method on nine different models against four attacks in RobustBench, consistently improving over the state of the art.
We show that even with state-of-the-art phase retrieval techniques, ROPUST remains an effective defense.
arXiv Detail & Related papers (2021-07-06T12:03:36Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Combating Adversaries with Anti-Adversaries [118.70141983415445]
In particular, our layer generates an input perturbation in the opposite direction of the adversarial one.
We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models.
Our anti-adversary layer significantly enhances model robustness while coming at no cost on clean accuracy.
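A minimal sketch of the anti-adversary idea described above: use the model's own prediction as a pseudo-label and push the input a few steps in the loss-decreasing direction (the opposite of an attacker) before the final classification. The step size and number of steps are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def anti_adversary_predict(model, x, steps=2, alpha=0.15):
    """Counter-perturb the input toward the model's own prediction, then classify."""
    with torch.no_grad():
        pseudo_label = model(x).argmax(dim=1)              # model's current guess
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), pseudo_label)
        grad, = torch.autograd.grad(loss, delta)
        # Step *against* the loss gradient, in the opposite direction of an attack.
        delta = (delta - alpha * grad.sign()).detach().requires_grad_(True)
    with torch.no_grad():
        return model(x + delta)
```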
arXiv Detail & Related papers (2021-03-26T09:36:59Z)
- Self-Progressing Robust Training [146.8337017922058]
Current robust training methods such as adversarial training explicitly use an "attack" to generate adversarial examples.
We propose a new framework called SPROUT, self-progressing robust training.
Our results shed new light on scalable, effective and attack-independent robust training methods.
arXiv Detail & Related papers (2020-12-22T00:45:24Z)
- Stochastic Security: Adversarial Defense Using Long-Run Dynamics of Energy-Based Models [82.03536496686763]
The vulnerability of deep networks to adversarial attacks is a central problem for deep learning from the perspective of both cognition and security.
We focus on defending naturally-trained classifiers using Markov Chain Monte Carlo (MCMC) sampling with an Energy-Based Model (EBM) for adversarial purification.
Our contributions are 1) an improved method for training EBMs with realistic long-run MCMC samples, 2) an Expectation-Over-Transformation (EOT) defense that resolves theoretical ambiguities for stochastic defenses, and 3) a state-of-the-art adversarial defense for naturally-trained classifiers and a competitive defense for adversarially-trained classifiers.
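A minimal sketch of EBM-based purification with Langevin MCMC, assuming a placeholder pre-trained `energy_model`: noisy gradient descent on the energy drifts the (possibly attacked) input back toward the learned data manifold before classification. The step count, step size, and noise scale are illustrative, and the long-run training improvements the paper contributes are not shown.

```python
import torch

def langevin_purify(x, energy_model, steps=100, step_size=1e-2, noise_scale=5e-3):
    """Noisy gradient descent on the learned energy, starting from the input."""
    x = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        energy = energy_model(x).sum()
        grad, = torch.autograd.grad(energy, x)
        with torch.no_grad():
            x -= 0.5 * step_size * grad                    # move toward lower energy
            x += noise_scale * torch.randn_like(x)         # Langevin noise term
            x.clamp_(0.0, 1.0)
    return x.detach()

# logits = classifier(langevin_purify(x_adv, energy_model))
```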
arXiv Detail & Related papers (2020-05-27T17:53:36Z)
- Improving the affordability of robustness training for DNNs [11.971637253035107]
We show that the initial phase of adversarial training is redundant and can be replaced with natural training, which significantly improves computational efficiency.
We show that our proposed method can reduce the training time by a factor of up to 2.5 with comparable or better model test accuracy and generalization on various strengths of adversarial attacks.
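A minimal sketch of the schedule described above: train naturally for the first part of training, then switch to PGD adversarial training (reusing the `pgd_attack` helper from the earlier sketch); the switch point is an illustrative assumption.

```python
import torch.nn.functional as F

def train(model, optimizer, loader, epochs=100, switch_epoch=40):
    """Natural training for the first epochs, PGD adversarial training afterwards."""
    for epoch in range(epochs):
        for x, y in loader:
            if epoch >= switch_epoch:
                x = pgd_attack(model, x, y)   # adversarial phase (see PGD sketch above)
            optimizer.zero_grad()
            F.cross_entropy(model(x), y).backward()
            optimizer.step()
```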
arXiv Detail & Related papers (2020-02-11T07:29:45Z)