Weight-Covariance Alignment for Adversarially Robust Neural Networks
- URL: http://arxiv.org/abs/2010.08852v3
- Date: Wed, 26 May 2021 10:16:14 GMT
- Title: Weight-Covariance Alignment for Adversarially Robust Neural Networks
- Authors: Panagiotis Eustratiadis, Henry Gouk, Da Li, Timothy Hospedales
- Abstract summary: We propose a new SNN that achieves state-of-the-art performance without relying on adversarial training.
While existing SNNs inject learned or hand-tuned isotropic noise, our SNN learns an anisotropic noise distribution to optimize a learning-theoretic bound on adversarial robustness.
- Score: 15.11530043291188
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stochastic Neural Networks (SNNs) that inject noise into their hidden layers have recently been shown to achieve strong robustness against adversarial attacks. However, existing SNNs are usually heuristically motivated, and often rely on adversarial training, which is computationally costly. We propose a new SNN that achieves state-of-the-art performance without relying on adversarial training, and enjoys solid theoretical justification. Specifically, while existing SNNs inject learned or hand-tuned isotropic noise, our SNN learns an anisotropic noise distribution to optimize a learning-theoretic bound on adversarial robustness. We evaluate our method on a number of popular benchmarks, show that it can be applied to different architectures, and that it provides robustness to a variety of white-box and black-box attacks, while being simple and fast to train compared to existing alternatives.
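To make the contrast with isotropic noise concrete, here is a minimal PyTorch sketch of a stochastic layer that learns an anisotropic Gaussian noise covariance through a lower-triangular factor. The class name, initialization, and training setup are illustrative assumptions, and the sketch omits the weight-covariance alignment term that gives the paper its title; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class AnisotropicNoiseLinear(nn.Module):
    """Hypothetical sketch: a linear layer that adds learned anisotropic
    Gaussian noise to its pre-activations. An isotropic SNN would add
    sigma * eps with a single scalar sigma; here a full lower-triangular
    factor L shapes the noise covariance Sigma = L @ L.T."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Cholesky-style factor of the noise covariance, learned jointly
        # with the weights; initialized near a small isotropic covariance.
        self.L = nn.Parameter(0.1 * torch.eye(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.linear(x)
        if self.training:
            eps = torch.randn_like(z)
            # eps @ tril(L).T is Gaussian with covariance L @ L.T,
            # i.e., anisotropic noise rather than a scaled identity.
            z = z + eps @ torch.tril(self.L).t()
        return z
```

At inference the noise can be switched off (as above) or kept on with predictions averaged over samples; either way, the robustness result in the paper comes from how the learned covariance relates to the weights, which this sketch does not model.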
Related papers
- Adversarially Robust Spiking Neural Networks Through Conversion [16.2319630026996]
Spiking neural networks (SNNs) provide an energy-efficient alternative to a variety of artificial neural network (ANN) based AI applications.
As progress in neuromorphic computing expands the use of SNNs in applications, the problem of their adversarial robustness becomes more pronounced.
arXiv Detail & Related papers (2023-11-15T08:33:46Z)
- Inherent Redundancy in Spiking Neural Networks [24.114844269113746]
Spiking Neural Networks (SNNs) are a promising energy-efficient alternative to conventional artificial neural networks.
In this work, we focus on three key questions regarding inherent redundancy in SNNs.
We propose an Advance Spatial Attention (ASA) module to harness SNNs' redundancy.
arXiv Detail & Related papers (2023-08-16T08:58:25Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs (a minimal sketch of plain interval bound propagation appears after this list).
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Efficiently training SNNs is a challenge due to the non-differentiability of spiking activations.
We propose the Differentiation on Spike Representation (DSR) method, which trains SNNs by differentiating through a representation of their spike trains and achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Rethinking Feature Uncertainty in Stochastic Neural Networks for Adversarial Robustness [12.330036598899218]
A randomness technique named Stochastic Neural Networks (SNNs) has been proposed recently.
The proposed MFDV-SNN, which maximizes feature distribution variance, achieves a significant improvement over existing methods, indicating that it is a simple but effective way to improve model robustness.
arXiv Detail & Related papers (2022-01-01T08:46:06Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion Bayesian neural network (BNN) for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach greatly reduces training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise [13.904091056365765]
We present an SNN training algorithm that uses crafted input noise and incurs no additional training time.
Compared to standard-trained direct-input SNNs, our trained models yield improved classification accuracy of up to 13.7%.
Our models also outperform inherently robust SNNs trained on rate-coded inputs with improved or similar classification performance on attack-generated images.
arXiv Detail & Related papers (2021-10-06T16:48:48Z)
- KATANA: Simple Post-Training Robustness Using Test Time Augmentations [49.28906786793494]
A leading defense against adversarial attacks is adversarial training, a technique in which a DNN is trained to be robust to such attacks.
We propose a new simple and easy-to-use technique, KATANA, for robustifying an existing pretrained DNN without modifying its weights (a minimal test-time-augmentation sketch appears after this list).
Our strategy achieves state-of-the-art adversarial robustness on diverse attacks with minimal compromise on natural-image classification accuracy.
arXiv Detail & Related papers (2021-09-16T19:16:00Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
The Spiking Neural Network (SNN) is a potential candidate for inherent robustness against adversarial attacks.
In this work, we demonstrate that the adversarial accuracy of SNNs under gradient-based attacks is higher than that of their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
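As referenced in the QA-IBP entry above, the following is a minimal sketch of plain (floating-point) interval bound propagation through one affine layer, the primitive that QA-IBP extends with quantization awareness. Function and variable names are illustrative assumptions, not the paper's API.

```python
import torch

def ibp_linear(lower: torch.Tensor, upper: torch.Tensor,
               weight: torch.Tensor, bias: torch.Tensor):
    """Soundly propagate the elementwise interval [lower, upper]
    through y = x @ weight.T + bias using the center-radius form."""
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    out_center = center @ weight.t() + bias
    # |W| absorbs the sign of every weight, so the radius stays valid.
    out_radius = radius @ weight.abs().t()
    return out_center - out_radius, out_center + out_radius

# Usage: bounds for inputs perturbed by +/-0.1 in the infinity norm.
x = torch.randn(1, 4)
W, b = torch.randn(3, 4), torch.randn(3)
lo, hi = ibp_linear(x - 0.1, x + 0.1, W, b)
assert torch.all(lo <= x @ W.t() + b) and torch.all(x @ W.t() + b <= hi)
```

Stacking such bounds layer by layer, with monotone activations applied to both endpoints, yields the certified output bounds that IBP-style training optimizes.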
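Similarly, as referenced in the KATANA entry, here is a minimal sketch of the test-time-augmentation idea: average class probabilities over randomly augmented copies of the input while leaving the pretrained weights untouched. Plain Gaussian noise stands in for KATANA's actual augmentation family, so this is an assumption-laden illustration rather than the paper's method.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def tta_predict(model: nn.Module, x: torch.Tensor,
                n_aug: int = 8, sigma: float = 0.1) -> torch.Tensor:
    """Average softmax outputs over n_aug randomly perturbed copies
    of x; the pretrained model is never modified or fine-tuned."""
    model.eval()
    probs = sum(
        model(x + sigma * torch.randn_like(x)).softmax(dim=-1)
        for _ in range(n_aug)
    ) / n_aug
    return probs.argmax(dim=-1)
```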
This list is automatically generated from the titles and abstracts of the papers in this site.