Multi-Objective Optimization for Size and Resilience of Spiking Neural
Networks
- URL: http://arxiv.org/abs/2002.01406v1
- Date: Tue, 4 Feb 2020 16:58:25 GMT
- Title: Multi-Objective Optimization for Size and Resilience of Spiking Neural
Networks
- Authors: Mihaela Dimovska, Travis Johnston, Catherine D. Schuman, J. Parker
Mitchell, Thomas E. Potok
- Abstract summary: Neuromorphic computing architectures model Spiking Neural Networks (SNNs) in silicon.
We study Spiking Neural Networks in two neuromorphic architecture implementations with the goal of decreasing their size.
We propose a multi-objective fitness function to optimize the size and resiliency of the SNN.
- Score: 0.9449650062296823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the connectivity mechanisms in the brain, neuromorphic computing
architectures model Spiking Neural Networks (SNNs) in silicon. As such,
neuromorphic architectures are designed and developed with the goal of having
small, low power chips that can perform control and machine learning tasks.
However, the power consumption of the developed hardware can greatly depend on
the size of the network that is being evaluated on the chip. Furthermore, the
accuracy of a trained SNN that is evaluated on chip can change due to voltage
and current variations in the hardware that perturb the learned weights of the
network. While efforts are made on the hardware side to minimize those
perturbations, a software-based strategy to make the deployed networks more
resilient can help further alleviate that issue. In this work, we study Spiking
Neural Networks in two neuromorphic architecture implementations with the goal
of decreasing their size, while at the same time increasing their resiliency to
hardware faults. We leverage an evolutionary algorithm to train the SNNs and
propose a multi-objective fitness function to optimize the size and resiliency
of the SNN. We demonstrate that this strategy leads to well-performing,
small-sized networks that are more resilient to hardware faults.
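As a concrete illustration of the strategy in the abstract, the sketch below shows how a scalarized multi-objective fitness could score a candidate SNN on accuracy, size, and resiliency to weight perturbations. This is a minimal sketch under stated assumptions, not the paper's implementation: the CandidateSNN container, the Gaussian noise model, the trial count, and the objective weights (w_acc, w_size, w_res) are illustrative, and the evaluate callback stands in for evaluation on (or in simulation of) the two neuromorphic architecture implementations.

```python
import random
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical stand-in for an evolved SNN: a flat weight vector plus size counts.
# The paper's actual network representation and simulators are not reproduced here.
@dataclass
class CandidateSNN:
    weights: List[float]   # learned synaptic weights
    num_neurons: int
    num_synapses: int

def resiliency(snn: CandidateSNN,
               evaluate: Callable[[CandidateSNN], float],
               noise_std: float = 0.05,
               trials: int = 10) -> float:
    """Mean accuracy under random weight perturbations that mimic
    voltage/current variation on the chip (illustrative noise model)."""
    scores = []
    for _ in range(trials):
        noisy = CandidateSNN(
            weights=[w + random.gauss(0.0, noise_std) for w in snn.weights],
            num_neurons=snn.num_neurons,
            num_synapses=snn.num_synapses,
        )
        scores.append(evaluate(noisy))
    return sum(scores) / len(scores)

def multiobjective_fitness(snn: CandidateSNN,
                           evaluate: Callable[[CandidateSNN], float],
                           max_size: int,
                           w_acc: float = 1.0,
                           w_size: float = 0.25,
                           w_res: float = 0.5) -> float:
    """Scalarized fitness: reward nominal accuracy and perturbation
    resiliency, penalize network size. Weights are illustrative."""
    acc = evaluate(snn)                                    # nominal accuracy
    size_penalty = (snn.num_neurons + snn.num_synapses) / max_size
    res = resiliency(snn, evaluate)
    return w_acc * acc + w_res * res - w_size * size_penalty
```

An evolutionary algorithm would then select, mutate, and recombine candidate networks according to this fitness; a Pareto-based selection scheme (e.g., NSGA-II) could replace the fixed scalarization if the objectives are to be kept separate.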
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that the resulting Scalable MNN (S-MNN) matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z) - Sparsity-Aware Hardware-Software Co-Design of Spiking Neural Networks: An Overview [1.0499611180329804]
Spiking Neural Networks (SNNs) are inspired by the sparse and event-driven nature of biological neural processing, and offer the potential for ultra-low-power artificial intelligence.
We explore the hardware-software co-design of sparse SNNs, examining how sparsity representation, hardware architectures, and training techniques influence hardware efficiency.
Our work aims to illuminate the path towards embedded neuromorphic systems that fully exploit the computational advantages of sparse SNNs.
arXiv Detail & Related papers (2024-08-26T17:22:11Z) - Q-SNNs: Quantized Spiking Neural Networks [12.719590949933105]
Spiking Neural Networks (SNNs) leverage sparse spikes to represent information and process them in an event-driven manner.
We introduce a lightweight and hardware-friendly Quantized SNN that applies quantization to both synaptic weights and membrane potentials.
We present a new Weight-Spike Dual Regulation (WS-DR) method inspired by information entropy theory.
arXiv Detail & Related papers (2024-06-19T16:23:26Z) - SpikingJelly: An open-source machine learning infrastructure platform
for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z) - The Hardware Impact of Quantization and Pruning for Weights in Spiking
Neural Networks [0.368986335765876]
Quantization and pruning of parameters can compress the model size, reduce memory footprints, and facilitate low-latency execution.
We study various combinations of pruning and quantization in isolation, cumulatively, and simultaneously to a state-of-the-art SNN targeting gesture recognition.
We show that this state-of-the-art model is amenable to aggressive parameter quantization, not suffering from any loss in accuracy down to ternary weights.
arXiv Detail & Related papers (2023-02-08T16:25:20Z) - A Resource-efficient Spiking Neural Network Accelerator Supporting
Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) have recently gained momentum due to their low-power, multiplication-free computing.
However, SNNs require very long spike trains (up to 1000) to reach an accuracy similar to that of their artificial neural network (ANN) counterparts for large models.
We present a novel hardware architecture that can efficiently support SNN with emerging neural encoding.
arXiv Detail & Related papers (2022-06-06T10:56:25Z) - FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z) - Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
arXiv Detail & Related papers (2021-09-02T13:58:12Z) - Online Training of Spiking Recurrent Neural Networks with Phase-Change
Memory Synapses [1.9809266426888898]
Training spiking recurrent neural networks (RNNs) on dedicated neuromorphic hardware is still an open challenge.
We present a simulation framework of differential-architecture arrays based on an accurate and comprehensive Phase-Change Memory (PCM) device model.
We train a spiking RNN whose weights are emulated in the presented simulation framework, using a recently proposed e-prop learning rule.
arXiv Detail & Related papers (2021-08-04T01:24:17Z) - ShiftAddNet: A Hardware-Inspired Deep Network [87.18216601210763]
ShiftAddNet is an energy-efficient multiplication-less deep neural network.
It leads to both energy-efficient inference and training, without compromising expressive capacity.
ShiftAddNet aggressively reduces over 80% hardware-quantified energy cost of DNNs training and inference, while offering comparable or better accuracies.
arXiv Detail & Related papers (2020-10-24T05:09:14Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.