SoftSNN: Low-Cost Fault Tolerance for Spiking Neural Network
Accelerators under Soft Errors
- URL: http://arxiv.org/abs/2203.05523v2
- Date: Sat, 12 Mar 2022 01:51:06 GMT
- Authors: Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad
Shafique
- Abstract summary: SoftSNN is a novel methodology to mitigate soft errors in the weight registers (synapses) and neurons of SNN accelerators without re-execution.
For a 900-neuron network, even with a high fault rate, SoftSNN keeps the accuracy degradation below 3% while reducing latency and energy by up to 3x and 2.3x, respectively.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Specialized hardware accelerators have been designed and employed
to maximize the performance efficiency of Spiking Neural Networks (SNNs).
However, such accelerators are vulnerable to transient faults (i.e., soft
errors), which occur due to high-energy particle strikes and manifest as bit
flips at the hardware layer. These errors can change the weight values and
neuron operations in the compute engine of SNN accelerators, thereby leading
to incorrect outputs and accuracy degradation. The impact of soft errors in
the compute engine, and the corresponding mitigation techniques, have not yet
been thoroughly studied for SNNs. A potential solution is employing redundant
executions (re-execution) to ensure correct outputs, but this incurs huge
latency and energy overheads. To address this, we propose SoftSNN, a novel
methodology that mitigates soft errors in the weight registers (synapses) and
neurons of SNN accelerators without re-execution, thereby maintaining accuracy
with low latency and energy overheads. Our SoftSNN methodology employs the
following key steps: (1) analyzing the SNN characteristics under soft errors
to identify faulty weights and neuron operations, which is required for
recognizing faulty SNN behavior; (2) a Bound-and-Protect technique that
leverages this analysis to improve SNN fault tolerance by bounding the weight
values and protecting the neurons from faulty operations; and (3) devising
lightweight hardware enhancements for the neural hardware accelerator to
efficiently support the proposed technique. The experimental results show
that, for a 900-neuron network, even with a high fault rate, our SoftSNN keeps
the accuracy degradation below 3% while reducing latency and energy by up to
3x and 2.3x, respectively, compared to the re-execution technique.
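The abstract stays at the conceptual level, so a small sketch may help make
the weight-bounding half of Bound-and-Protect concrete. The snippet below is
a hypothetical Python illustration, not the authors' implementation: it
emulates a soft error by flipping the sign bit of an 8-bit weight register,
then clamps the corrupted value back into a healthy range. The paper derives
its bounds from an offline fault analysis; the min/max of the trained weights
used here is just a stand-in.

```python
def flip_bit(weight: int, bit: int) -> int:
    """Emulate a soft error: flip one bit of a signed 8-bit weight."""
    raw = weight & 0xFF                        # two's-complement byte
    raw ^= 1 << bit                            # the particle strike
    return raw - 256 if raw >= 128 else raw    # back to signed int8

def bound_weight(weight: int, w_min: int, w_max: int) -> int:
    """Bound-and-Protect for weights: clamp a possibly faulty value into
    the healthy range (hypothetical bounds; the paper obtains them from
    an offline analysis of the SNN under soft errors)."""
    return max(w_min, min(w_max, weight))

weights = [12, -7, 30, 5]                  # toy trained layer
w_min, w_max = min(weights), max(weights)  # stand-in for analyzed bounds

faulty = flip_bit(weights[2], bit=7)            # 30 -> -98 (sign bit hit)
repaired = bound_weight(faulty, w_min, w_max)   # clamped back to -7
print(f"faulty={faulty}, repaired={repaired}")
```

A flipped high-order bit turns a small positive weight into a large negative
one; clamping cannot recover the original value, but it keeps the fault's
magnitude small enough that accuracy degrades only slightly, which matches
the below-3% degradation the paper reports.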
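Step (2) also protects neurons from faulty operations. The abstract does not
spell out the mechanism; one plausible reading, sketched below under that
assumption, is to bound the membrane potential of a leaky integrate-and-fire
(LIF) neuron so that a corrupted accumulation cannot saturate or permanently
silence it. All names and constants here are hypothetical.

```python
def lif_step_protected(v, i_in, leak=0.9, v_thresh=1.0,
                       v_reset=0.0, v_bound=2.0):
    """One LIF update with a hypothetical protection bound: clamp the
    membrane potential so a faulty result (e.g., a bit flip producing a
    huge input current) leaves the neuron in a recoverable state."""
    v = leak * v + i_in                   # leaky integration
    v = max(-v_bound, min(v_bound, v))    # protect against outliers
    if v >= v_thresh:                     # fire and reset as usual
        return v_reset, True
    return v, False

# A soft error injects an absurdly large input current for one timestep.
v, spike = lif_step_protected(v=0.5, i_in=1e6)
print(v, spike)   # -> (0.0, True): one spurious spike, then recovery
```

Without the clamp, the corrupted potential would dominate subsequent
timesteps; with it, the fault costs at most one spurious spike. Per the
abstract, the actual protection is realized through lightweight hardware
enhancements in the accelerator rather than in software.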