QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of
Neural Networks
- URL: http://arxiv.org/abs/2004.11233v2
- Date: Sat, 27 Jun 2020 13:14:58 GMT
- Title: QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of
Neural Networks
- Authors: Priyadarshini Panda
- Abstract summary: QUANOS is a framework that performs layer-specific hybrid quantization based on Adversarial Noise Sensitivity (ANS).
Our experiments on the CIFAR10 and CIFAR100 datasets show that QUANOS outperforms a homogeneously quantized 8-bit precision baseline in terms of adversarial robustness.
- Score: 3.2242513084255036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial
attacks, wherein a model is fooled by slight perturbations applied to the
input. With the advent of Internet-of-Things and the necessity to enable
intelligence in embedded devices, low-power and secure hardware implementation
of DNNs is vital. In this paper, we investigate the use of quantization to
potentially resist adversarial attacks. Several recent studies have reported
remarkable results in reducing the energy requirement of a DNN through
quantization. However, no prior work has considered the relationship between
adversarial sensitivity of a DNN and its effect on quantization. We propose
QUANOS, a framework that performs layer-specific hybrid quantization based on
Adversarial Noise Sensitivity (ANS). We identify a novel noise stability metric
(ANS) for DNNs, i.e., the sensitivity of each layer's computation to
adversarial noise. ANS allows for a principled way of determining optimal
bit-width per layer that incurs adversarial robustness as well as
energy-efficiency with minimal loss in accuracy. Essentially, QUANOS assigns
layer significance based on its contribution to adversarial perturbation and
accordingly scales the precision of the layers. A key advantage of QUANOS is
that it does not rely on a pre-trained model and can be applied in the initial
stages of training. We evaluate the benefits of QUANOS on precision scalable
Multiply and Accumulate (MAC) hardware architectures with data gating and
subword parallelism capabilities. Our experiments on the CIFAR10 and CIFAR100 datasets
show that QUANOS outperforms a homogeneously quantized 8-bit precision baseline in
terms of adversarial robustness (3%-4% higher) while yielding improved
compression (>5x) and energy savings (>2x) at iso-accuracy.
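The paper does not include code, but the core idea is easy to sketch. The following minimal, hypothetical Python snippet scores each layer by how much its activations change under an adversarial perturbation and maps higher scores to higher bit-widths; the function names, the random-sign perturbation stand-in, and the linear score-to-bit-width rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of ANS-style hybrid quantization (illustrative only).
import torch
import torch.nn as nn

def layer_outputs(layers, x):
    """Collect every layer's output for input x."""
    outs = []
    for layer in layers:
        x = layer(x)
        outs.append(x)
    return outs

def ans_scores(layers, x_clean, x_adv):
    """Per-layer relative activation distortion under adversarial noise."""
    clean = layer_outputs(layers, x_clean)
    adv = layer_outputs(layers, x_adv)
    return [(a - c).norm() / (c.norm() + 1e-12) for c, a in zip(clean, adv)]

def assign_bitwidths(scores, low=2, high=8):
    """Linearly map normalized sensitivity to a bit-width (assumed rule)."""
    smin, smax = min(scores), max(scores)
    return [int(round(low + float((s - smin) / (smax - smin + 1e-12)) * (high - low)))
            for s in scores]

if __name__ == "__main__":
    torch.manual_seed(0)
    layers = nn.ModuleList([nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)])
    x = torch.randn(8, 32)
    # Random-sign noise stands in for a real attack such as FGSM, which would
    # use the sign of the loss gradient instead.
    x_adv = x + (8 / 255) * torch.randn_like(x).sign()
    print(assign_bitwidths(ans_scores(layers, x, x_adv)))
```

Since such scores are computed from activations alone, a scheme like this would not require a fully trained model, which is consistent with the paper's point that QUANOS can be applied in the initial stages of training.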
Related papers
- The Inherent Adversarial Robustness of Analog In-Memory Computing [2.435021773579434]
A key challenge for Deep Neural Network (DNN) algorithms is their vulnerability to adversarial attacks.
In this paper, we experimentally validate a conjecture for the first time on an AIMC chip based on Phase Change Memory (PCM) devices.
Additional robustness is also observed when performing hardware-in-the-loop attacks.
arXiv Detail & Related papers (2024-11-11T14:29:59Z)
- An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs) restricted to binary values as a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties.
arXiv Detail & Related papers (2023-07-29T06:27:28Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks [0.368986335765876]
Quantization and pruning of parameters can both compress the model size, reduce memory footprint, and facilitate low-latency execution.
We study various combinations of pruning and quantization in isolation, cumulatively, and simultaneously to a state-of-the-art SNN targeting gesture recognition.
We show that this state-of-the-art model is amenable to aggressive parameter quantization, not suffering from any loss in accuracy down to ternary weights.
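For context, "ternary weights" means each weight is restricted to three levels. A generic threshold-based ternarization rule (in the style of ternary weight networks, not taken from this paper) looks like:

```python
# Generic ternary quantization: map each weight to {-scale, 0, +scale}.
# The 0.7 * mean|w| threshold is a common heuristic, not this paper's rule.
import torch

def ternarize(w: torch.Tensor, delta_scale: float = 0.7) -> torch.Tensor:
    delta = delta_scale * w.abs().mean()               # dead-zone threshold
    mask = (w.abs() > delta).float()                   # keep large weights only
    scale = (w.abs() * mask).sum() / mask.sum().clamp(min=1)
    return scale * w.sign() * mask
```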
arXiv Detail & Related papers (2023-02-08T16:25:20Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
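QA-IBP itself is not reproduced here; as background, the standard interval bound propagation step it builds on pushes an elementwise input interval through an affine layer like this (generic IBP, not the paper's quantization-aware variant):

```python
# Generic IBP step through y = x @ W.T + b: propagate an elementwise
# input interval [lower, upper] to sound output bounds.
import torch

def ibp_affine(W, b, lower, upper):
    mu = (upper + lower) / 2          # interval centers
    r = (upper - lower) / 2           # interval radii (non-negative)
    mu_out = mu @ W.t() + b
    r_out = r @ W.abs().t()           # radii expand through |W|
    return mu_out - r_out, mu_out + r_out
```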
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
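As a reminder of what "implicit equations as layers" means, an INN layer defines its output as a fixed point; a minimal illustrative evaluation by fixed-point iteration (not the paper's method) is:

```python
# Evaluate an implicit layer x = relu(x @ W.T + u @ U.T + b) by fixed-point
# iteration; convergence is assumed (e.g., a well-posedness condition on W).
import torch

def implicit_layer(W, U, b, u, iters=50):
    x = torch.zeros(u.shape[0], W.shape[0])
    for _ in range(iters):
        x = torch.relu(x @ W.t() + u @ U.t() + b)
    return x
```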
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Energy Efficient Learning with Low Resolution Stochastic Domain Wall Synapse Based Deep Neural Networks [0.9176056742068814]
We demonstrate that extremely low resolution quantized (nominally 5-state) synapses with large variations in Domain Wall (DW) position can be both energy efficient and achieve reasonably high testing accuracies.
We show that, with suitable modifications to the learning algorithm, we can address both the stochastic behavior and the effect of the synapses' low resolution to achieve high testing accuracies.
arXiv Detail & Related papers (2021-11-14T09:12:29Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks [3.125321230840342]
Adversarial examples are inputs that have been carefully perturbed to fool classifier networks, while appearing unchanged to humans.
We propose a structured methodology of augmenting a deep neural network (DNN) with a detector subnetwork.
We show that our method improves state-of-the-art detector robustness against adversarial examples.
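The general shape of such a detector subnetwork is a small binary classifier attached to intermediate features; a hypothetical sketch (the architecture is illustrative, not the paper's design):

```python
# Illustrative detector head: a small binary classifier over intermediate
# features whose logit flags likely-adversarial inputs.
import torch.nn as nn

class DetectorHead(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, feats):
        return self.net(feats.flatten(1))  # logit > 0 => flag as adversarial
```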
arXiv Detail & Related papers (2021-01-05T14:31:53Z)
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in the original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.