Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure
DNN Accelerators
- URL: http://arxiv.org/abs/2104.08323v1
- Date: Fri, 16 Apr 2021 19:11:14 GMT
- Title: Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure
DNN Accelerators
- Authors: David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
- Abstract summary: We show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) significantly improves robustness against random or adversarial bit errors in quantized DNN weights.
This leads to high energy savings for low-voltage operation as well as low-precision quantization, but also improves security of DNN accelerators.
- Score: 105.60654479548356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural network (DNN) accelerators received considerable attention in
recent years due to the potential to save energy compared to mainstream
hardware. Low-voltage operation of DNN accelerators reduces energy consumption
significantly further; however, it causes bit-level failures in the memory
storing the quantized DNN weights. Furthermore, DNN accelerators have
been shown to be vulnerable to adversarial attacks on voltage controllers or
individual bits. In this paper, we show that a combination of robust
fixed-point quantization, weight clipping, as well as random bit error training
(RandBET) or adversarial bit error training (AdvBET) improves robustness
against random or adversarial bit errors in quantized DNN weights
significantly. This not only yields high energy savings for low-voltage
operation and low-precision quantization, but also improves the security of
DNN accelerators. Our approach generalizes across operating voltages and
accelerators, as demonstrated on bit errors from profiled SRAM arrays, and
achieves robustness against both targeted and untargeted bit-level attacks.
Without losing more than 0.8%/2% in test accuracy, we can reduce energy
consumption on CIFAR10 by 20%/30% for 8/4-bit quantization using RandBET.
Allowing up to 320 adversarial bit errors, AdvBET reduces test error from above
90% (chance level) to 26.22% on CIFAR10.
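A minimal sketch of the bit error training idea described above, assuming PyTorch and a simple symmetric fixed-point quantizer; the helper names (quantize, inject_bit_errors, randbet_step) and all hyperparameters are illustrative, not the authors' implementation:

```python
import torch

def quantize(w, bits=8, w_max=0.1):
    """Symmetric fixed-point quantization with weight clipping to [-w_max, w_max]."""
    scale = w_max / (2 ** (bits - 1) - 1)
    q = torch.clamp(torch.round(w / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q.to(torch.int32), scale

def inject_bit_errors(q, bits=8, p=0.01):
    """Flip each stored bit independently with probability p (two's-complement view)."""
    u = (q & ((1 << bits) - 1)).to(torch.int64)
    for b in range(bits):
        flip = (torch.rand(u.shape, device=u.device) < p).to(torch.int64)
        u = u ^ (flip << b)
    return torch.where(u >= 2 ** (bits - 1), u - 2 ** bits, u).to(torch.int32)

def randbet_step(model, loss_fn, x, y, bits=8, p=0.01, w_max=0.1):
    """One step on bit-error-perturbed quantized weights; the clean weights are
    restored afterwards, and the caller applies optimizer.step() to them."""
    clean = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:                       # perturb only weight matrices/kernels
            continue
        clean[name] = param.data.clone()
        q, scale = quantize(param.data, bits, w_max)
        param.data = inject_bit_errors(q, bits, p).float() * scale
    loss = loss_fn(model(x), y)
    loss.backward()                               # gradients under perturbed weights
    for name, param in model.named_parameters():
        if name in clean:
            param.data = clean[name]
    return loss
```

AdvBET follows the same pattern, except that the bit flips are chosen adversarially to maximize the loss rather than sampled at random.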
Related papers
- NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes [52.51014498593644]
Deep neural networks (DNNs) have become ubiquitous in machine learning, but their energy consumption remains a notable issue.
We introduce NeuralFuse, a novel add-on module that addresses the accuracy-energy tradeoff in low-voltage regimes.
At a 1% bit error rate, NeuralFuse can reduce memory access energy by up to 24% while recovering accuracy by up to 57%.
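A rough sketch of the add-on idea as summarized above, assuming PyTorch: a small trainable input transformation sits in front of a frozen base model and is trained against bit-error-perturbed copies of it. The module architecture and training loop below are our own simplifications, not the paper's implementation:

```python
import torch
import torch.nn as nn

class AddOnTransform(nn.Module):
    """Tiny input transformation module; the base model itself is never modified."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return torch.clamp(x + self.net(x), 0.0, 1.0)   # residual, keep valid pixel range

def train_step(addon, perturbed_models, loss_fn, x, y, opt):
    """Train the add-on against several frozen, bit-error-perturbed model copies."""
    opt.zero_grad()
    loss = sum(loss_fn(m(addon(x)), y) for m in perturbed_models) / len(perturbed_models)
    loss.backward()
    opt.step()
    return loss
```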
arXiv Detail & Related papers (2023-06-29T11:38:22Z)
- Quantized Neural Networks for Low-Precision Accumulation with Guaranteed Overflow Avoidance [68.8204255655161]
We introduce a quantization-aware training algorithm that guarantees avoiding numerical overflow when reducing the precision of accumulators during inference.
We evaluate our algorithm across multiple quantized models that we train for different tasks, showing that our approach can reduce the precision of accumulators while maintaining model accuracy with respect to a floating-point baseline.
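For intuition, a generic worst-case bound on the accumulator width needed for a length-K dot product of quantized operands; this back-of-the-envelope calculation is ours and is looser than the paper's guarantee:

```python
import math

def accumulator_bits(k, weight_bits, act_bits):
    """Bits needed so a length-k dot product of w-bit signed weights and
    a-bit unsigned activations can never overflow."""
    max_weight = 2 ** (weight_bits - 1)          # |signed weight| <= 2^(w-1)
    max_act = 2 ** act_bits - 1                  # unsigned activation < 2^a
    max_abs_sum = k * max_weight * max_act       # worst-case |accumulated value|
    return math.ceil(math.log2(max_abs_sum + 1)) + 1   # + sign bit

# e.g. a 3x3 conv over 64 input channels (k = 576) with 4-bit weights/activations:
print(accumulator_bits(576, 4, 4))   # -> 18 bits
```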
arXiv Detail & Related papers (2023-01-31T02:46:57Z)
- FlatENN: Train Flat for Enhanced Fault Tolerance of Quantized Deep Neural Networks [0.03807314298073299]
We investigate the impact of bit-flip and stuck-at faults on activation-sparse quantized DNNs (QDNNs).
We show that a high level of activation sparsity comes at the cost of larger vulnerability to faults.
We propose the mitigation of the impact of faults by employing a sharpness-aware quantization scheme.
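A minimal sketch of the "train flat" ingredient, assuming PyTorch and a plain sharpness-aware (SAM-style) update; the paper's sharpness-aware quantization scheme is more involved than this:

```python
import torch

def sam_step(model, loss_fn, x, y, opt, rho=0.05):
    """Perturb weights toward the local worst case, take the gradient there,
    then update the original (restored) weights."""
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    eps = [rho * g / norm for g in grads]
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.add_(e)                    # move to the nearby worst-case point
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                      # gradient at the perturbed point
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)                    # restore the original weights
    opt.step()
    return loss
```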
arXiv Detail & Related papers (2022-12-29T06:06:14Z)
- SoftSNN: Low-Cost Fault Tolerance for Spiking Neural Network Accelerators under Soft Errors [15.115813664357436]
SoftSNN is a novel methodology to mitigate soft errors in the weight registers (synapses) and neurons of SNN accelerators without re-execution.
For a 900-neuron network with even a high fault rate, our SoftSNN maintains the accuracy degradation below 3%, while reducing latency and energy by up to 3x and 2.3x respectively.
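As a rough illustration of avoiding re-execution, the sketch below simply clamps weight reads to a profiled safe range so a flipped high-order bit cannot inject a huge synaptic value; this is our simplification, not SoftSNN's actual mechanism:

```python
import numpy as np

def protect_weights(w_read, w_min, w_max):
    """Clamp out-of-range weight reads from (possibly faulty) registers."""
    return np.clip(w_read, w_min, w_max)

# usage: bounds profiled offline from the fault-free trained SNN
weights = np.array([0.02, -0.01, 37.5, 0.005])   # 37.5: corrupted by a bit flip
print(protect_weights(weights, -0.1, 0.1))       # -> [ 0.02 -0.01  0.1  0.005]
```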
arXiv Detail & Related papers (2022-03-10T18:20:28Z)
- Exploring Fault-Energy Trade-offs in Approximate DNN Hardware Accelerators [2.9649783577150837]
We present an extensive layer-wise and bit-wise fault resilience and energy analysis of different AxDNNs.
Our results demonstrate how the fault resilience of AxDNNs relates to their energy efficiency.
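The kind of layer-wise, bit-position-wise fault-injection sweep such an analysis relies on can be sketched as follows (our illustration; the paper's fault models and approximate layers are more detailed):

```python
import numpy as np

def flip_bit(q_weights, bit, rate, rng):
    """Flip bit position `bit` in a random fraction `rate` of int8 weights."""
    u = q_weights.astype(np.uint8)                 # two's-complement view
    hit = rng.random(u.shape) < rate
    return np.where(hit, u ^ np.uint8(1 << bit), u).astype(np.int8)

def sweep(layers, evaluate, rate=1e-3, bits=8, seed=0):
    """layers: {name: int8 weights}; evaluate(weights_dict) -> accuracy."""
    rng = np.random.default_rng(seed)
    results = {}
    for name in layers:
        for bit in range(bits):
            faulty = dict(layers)
            faulty[name] = flip_bit(layers[name], bit, rate, rng)
            results[(name, bit)] = evaluate(faulty)   # resilience per layer and bit
    return results
```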
arXiv Detail & Related papers (2021-01-08T05:52:12Z)
- SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training [82.35376405568975]
Deep neural networks (DNNs) come with heavy parameterization, which typically requires external dynamic random-access memory (DRAM) for storage.
We present SmartDeal (SD), an algorithm framework to trade higher-cost memory storage/access for lower-cost computation.
We show that SD leads to 10.56x and 4.48x reduction in the storage and training energy, with negligible accuracy loss compared to state-of-the-art training baselines.
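The storage-for-computation trade can be illustrated by storing a weight matrix as a small basis times a pruned coefficient matrix and rebuilding it on the fly; the naive least-squares factorization below is only a stand-in for SmartDeal's hardware-friendly decomposition:

```python
import numpy as np

def decompose(W, basis_size=8, seed=0):
    """Store W as a small shared basis B plus coefficients C with W ~= C @ B."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((basis_size, W.shape[1]))      # small basis (kept on-chip)
    C = np.linalg.lstsq(B.T, W.T, rcond=None)[0].T         # least-squares coefficients
    C[np.abs(C) < 0.1] = 0.0                               # crude magnitude pruning
    return C, B

def rebuild(C, B):
    return C @ B        # cheap recomputation replaces fetching the full W from DRAM

W = np.random.default_rng(1).standard_normal((64, 32))
C, B = decompose(W)
print(np.mean((W - rebuild(C, B)) ** 2))   # reconstruction error of this crude sketch
```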
arXiv Detail & Related papers (2021-01-04T18:54:07Z)
- FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training [81.85361544720885]
We propose FracTrain, which integrates progressive fractional quantization to gradually increase the precision of activations, weights, and gradients.
FracTrain reduces the computational cost and hardware-quantified energy/latency of DNN training while achieving comparable or better (-0.12% to +1.87%) accuracy.
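A toy illustration of a progressive precision schedule in this spirit; the actual FracTrain schedules (and its spatially "fractional" component) are more sophisticated:

```python
def precision_at(epoch, total_epochs, schedule=(3, 4, 6, 8)):
    """Return the bit-width to use at a given training epoch."""
    stage = min(len(schedule) - 1, epoch * len(schedule) // total_epochs)
    return schedule[stage]

print([precision_at(e, 100) for e in (0, 30, 60, 99)])   # -> [3, 4, 6, 8]
```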
arXiv Detail & Related papers (2020-12-24T05:24:10Z)
- Bit Error Robustness for Energy-Efficient DNN Accelerators [93.58572811484022]
We show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) improves robustness against random bit errors.
This leads to high energy savings from both low-voltage operation and low-precision quantization.
arXiv Detail & Related papers (2020-06-24T18:23:10Z)
- Towards Explainable Bit Error Tolerance of Resistive RAM-Based Binarized Neural Networks [7.349786872131006]
Non-volatile memory, such as resistive RAM (RRAM), is an emerging energy-efficient storage technology.
Binary neural networks (BNNs) can tolerate a certain percentage of errors without a loss in accuracy.
The bit error tolerance (BET) in BNNs can be achieved by flipping the weight signs during training.
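A minimal sketch of this training-time idea, assuming PyTorch: randomly flip a fraction of the binarized weight signs in the forward pass so the BNN learns to tolerate such flips (a simplified reading of the summary, not the paper's exact procedure):

```python
import torch

def flip_signs(w_bin, flip_rate=0.05):
    """w_bin: tensor of +1/-1 binarized weights; flip a random subset of signs."""
    flips = (torch.rand_like(w_bin) < flip_rate).float()
    return w_bin * (1.0 - 2.0 * flips)     # flipped entries change sign

w = torch.sign(torch.randn(4, 4))
print(flip_signs(w, flip_rate=0.25))
```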
arXiv Detail & Related papers (2020-02-03T17:38:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.