Towards Explainable Bit Error Tolerance of Resistive RAM-Based Binarized
Neural Networks
- URL: http://arxiv.org/abs/2002.00909v1
- Date: Mon, 3 Feb 2020 17:38:45 GMT
- Title: Towards Explainable Bit Error Tolerance of Resistive RAM-Based Binarized
Neural Networks
- Authors: Sebastian Buschjäger, Jian-Jia Chen, Kuan-Hsun Chen, Mario Günzel,
Christian Hakert, Katharina Morik, Rodion Novkin, Lukas Pfahler, Mikail Yayla
- Abstract summary: Non-volatile memory, such as resistive RAM (RRAM), is an emerging energy-efficient storage technology.
Binary neural networks (BNNs) can tolerate a certain percentage of bit errors without a loss in accuracy.
Bit error tolerance (BET) in BNNs can be achieved by flipping the weight signs during training.
- Score: 7.349786872131006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-volatile memory, such as resistive RAM (RRAM), is an emerging
energy-efficient storage technology, especially for low-power machine learning
models on the edge. It is reported, however, that the bit error rate of RRAMs
can be up to 3.3% in the ultra-low-power setting, which can be critical for
many use cases. Binary neural networks (BNNs), a resource-efficient variant of
neural networks (NNs), can tolerate a certain percentage of errors without a
loss in accuracy and demand fewer resources for computation and storage. Bit
error tolerance (BET) in BNNs can be achieved by flipping the weight signs
during training, as proposed by Hirtzlin et al., but their method has a
significant drawback, especially for fully connected neural networks (FCNNs):
the networks overfit to the error rate used in training, which leads to low
accuracy at lower error rates. In addition, the underlying principles of BET
have not been investigated. In this work, we improve the training for BET of
BNNs and aim to explain this property. We propose a straight-through gradient
approximation to improve the weight-sign-flip training, with which BNNs adapt
less to the bit error rate used in training. To explain the achieved
robustness, we define a metric that aims to measure BET without fault
injection. We evaluate the metric and find that it correlates with accuracy
over error rate for all FCNNs tested. Finally, we explore the influence of a
novel regularizer that optimizes with respect to this metric, with the aim of
providing a configurable trade-off between accuracy and BET.
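
The abstract names two concrete ingredients: weight-sign-flip fault injection
during training (Hirtzlin et al.) and a straight-through gradient
approximation for the binarized weights, plus a metric that estimates BET
without fault injection. The sketch below (PyTorch) illustrates how such a
setup might look. All names (SignSTE, inject_bit_flips, BinarizedLinear,
margin_bet_proxy) are hypothetical, and the margin-based proxy is only an
illustrative stand-in for the paper's metric, which the abstract does not
define.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SignSTE(torch.autograd.Function):
    """Binarize weights to {-1, +1}; pass gradients straight through,
    clipped to |x| <= 1, as in standard BNN training."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: gradient passes where |x| <= 1.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)


def inject_bit_flips(w_bin, p):
    """Flip the sign of each binary weight independently with
    probability p, emulating RRAM bit errors at error rate p."""
    flip = (torch.rand_like(w_bin) < p).to(w_bin.dtype)
    return w_bin * (1.0 - 2.0 * flip)  # flip maps +1 -> -1 and -1 -> +1


class BinarizedLinear(nn.Module):
    """A linear layer with binarized weights and optional bit-flip
    fault injection during training (hypothetical module)."""

    def __init__(self, in_features, out_features, bit_error_rate=0.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bit_error_rate = bit_error_rate

    def forward(self, x):
        w_bin = SignSTE.apply(self.weight)
        if self.training and self.bit_error_rate > 0:
            w_bin = inject_bit_flips(w_bin, self.bit_error_rate)
        return x @ w_bin.t()


def margin_bet_proxy(model, x, y):
    """A hypothetical margin-style proxy for BET without fault
    injection: the mean gap between the correct-class logit and the
    strongest wrong-class logit. Larger margins suggest more sign
    flips are needed to change the prediction."""
    logits = model(x)
    correct = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    wrong = logits.masked_fill(
        F.one_hot(y, logits.size(1)).bool(), float("-inf")
    ).max(dim=1).values
    return (correct - wrong).mean()
```

In such a setup, training with inject_bit_flips active at a fixed rate
corresponds to fault-injection training, while the straight-through backward
pass lets the real-valued shadow weights keep learning through the
binarization; a proxy like margin_bet_proxy could also serve as the basis for
a regularizer that trades accuracy against BET, in the spirit of the one the
abstract proposes.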