HyperSNN: A new efficient and robust deep learning model for resource
constrained control applications
- URL: http://arxiv.org/abs/2308.08222v2
- Date: Thu, 17 Aug 2023 04:20:28 GMT
- Title: HyperSNN: A new efficient and robust deep learning model for resource
constrained control applications
- Authors: Zhanglu Yan, Shida Wang, Kaiwen Tang, Weng-Fai Wong
- Abstract summary: HyperSNN is an innovative method for control tasks that uses spiking neural networks (SNNs) in combination with hyperdimensional computing.
Our model was tested on AI Gym benchmarks, including Cartpole, Acrobot, MountainCar, and Lunar Lander.
- Score: 4.8915861089531205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In light of the increasing adoption of edge computing in areas such as
intelligent furniture, robotics, and smart homes, this paper introduces
HyperSNN, an innovative method for control tasks that uses spiking neural
networks (SNNs) in combination with hyperdimensional computing. HyperSNN
substitutes expensive 32-bit floating point multiplications with 8-bit integer
additions, resulting in reduced energy consumption while enhancing robustness
and potentially improving accuracy. Our model was tested on AI Gym benchmarks,
including Cartpole, Acrobot, MountainCar, and Lunar Lander. HyperSNN achieves
control accuracies that are on par with conventional machine learning methods
but with only 1.36% to 9.96% of the energy expenditure. Furthermore, our
experiments showed increased robustness when using HyperSNN. We believe that
HyperSNN is especially suitable for interactive, mobile, and wearable devices,
promoting energy-efficient and robust system design. Moreover, it paves the
way for the practical implementation of complex algorithms such as model
predictive control (MPC) in real-world industrial scenarios.
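Because spike activations are binary, a spiking layer's weighted sum needs no multiplications: with quantized weights it reduces to summing 8-bit integers wherever a spike arrives. The sketch below illustrates this substitution in NumPy; the function name, threshold scheme, and int32 accumulator are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def int8_spiking_layer(spikes, weights_q, threshold):
    """One step of a spiking layer using only integer additions.

    Spikes are 0/1, so the float32 multiply-accumulate w @ x reduces to
    adding up the int8 weights of the inputs that fired -- the substitution
    HyperSNN's energy savings rest on (illustrative sketch only).
    """
    membrane = np.zeros(weights_q.shape[0], dtype=np.int32)  # wide accumulator
    for j in np.nonzero(spikes)[0]:          # inputs that spiked
        membrane += weights_q[:, j].astype(np.int32)  # add, never multiply
    return (membrane >= threshold).astype(np.int8)    # output spikes

# Example: 4 inputs, 3 output neurons, int8 weights
rng = np.random.default_rng(0)
w_q = rng.integers(-128, 127, size=(3, 4), dtype=np.int8)
out = int8_spiking_layer(np.array([1, 0, 1, 0], dtype=np.int8), w_q, threshold=50)
```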
Related papers
- SNN4Agents: A Framework for Developing Energy-Efficient Embodied Spiking Neural Networks for Autonomous Agents [6.110543738208028]
Spiking Neural Networks (SNNs) use spikes from event-based cameras or data conversion pre-processing to perform sparse computations efficiently.
We propose a novel framework called SNN4Agents that consists of a set of optimization techniques for designing energy-efficient embodied SNNs.
Our framework maintains high accuracy (84.12%) with 68.75% memory savings, a 3.58x speed-up, and a 4.03x improvement in energy efficiency.
arXiv Detail & Related papers (2024-04-14T19:06:00Z)
- A Cloud-Edge Framework for Energy-Efficient Event-Driven Control: An Integration of Online Supervised Learning, Spiking Neural Networks and Local Plasticity Rules [0.0]
This paper presents a novel cloud-edge framework for addressing computational and energy constraints in complex control systems.
By integrating a biologically plausible learning method with local plasticity rules, we harness the efficiency, scalability, and low latency of Spiking Neural Networks (SNNs).
This design replicates control signals from a cloud-based controller directly on the plant, reducing the need for constant plant-cloud communication.
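A minimal sketch of what a local plasticity rule of this kind could look like: the edge network adjusts each weight from purely local quantities (a pre-synaptic trace and the error against the cloud controller's signal), with no global backpropagation. All names and the exact delta-rule form are assumptions, not the paper's method.

```python
import numpy as np

def local_plasticity_step(w, pre_trace, post_spikes, target, lr=1e-3):
    """One local weight update (hypothetical sketch).

    w:           (n_post, n_pre) synaptic weights
    pre_trace:   (n_pre,) low-pass-filtered pre-synaptic activity
    post_spikes: (n_post,) the edge SNN's output
    target:      (n_post,) control signal replicated from the cloud
    """
    error = target - post_spikes                # locally available error
    return w + lr * np.outer(error, pre_trace)  # outer-product local update
```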
arXiv Detail & Related papers (2024-04-12T22:34:17Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
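The basic building block such toolkits simulate is the leaky integrate-and-fire (LIF) neuron. Below is a generic NumPy sketch of one LIF time step, not SpikingJelly's API; the charge equation follows the common v += (x - (v - v_reset)) / tau form.

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """One leaky integrate-and-fire step over a vector of neurons."""
    v = v + (x - (v - v_reset)) / tau            # leaky integration of input
    spikes = (v >= v_threshold).astype(float)    # fire where threshold crossed
    v = np.where(spikes > 0, v_reset, v)         # hard reset of fired neurons
    return spikes, v
```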
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Energy-Efficient On-Board Radio Resource Management for Satellite Communications via Neuromorphic Computing [59.40731173370976]
We investigate the application of energy-efficient brain-inspired machine learning models for on-board radio resource management.
For relevant workloads, spiking neural networks (SNNs) implemented on Loihi 2 yield higher accuracy while reducing power consumption by more than 100× compared to the CNN-based reference platform.
arXiv Detail & Related papers (2023-08-22T03:13:57Z)
- The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks [0.368986335765876]
Quantization and pruning of parameters can compress the model size, reduce memory footprint, and facilitate low-latency execution.
We study various combinations of pruning and quantization in isolation, cumulatively, and simultaneously to a state-of-the-art SNN targeting gesture recognition.
We show that this state-of-the-art model is amenable to aggressive parameter quantization, suffering no loss in accuracy even down to ternary weights.
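For concreteness, here is a common threshold-based ternarization scheme (in the style of Ternary Weight Networks); the 0.7 * mean(|w|) threshold and per-tensor scale are assumptions, not necessarily the exact method studied in the paper.

```python
import numpy as np

def ternarize(w, delta_scale=0.7):
    """Quantize a weight tensor to alpha * {-1, 0, +1} (illustrative sketch)."""
    delta = delta_scale * np.abs(w).mean()       # magnitudes below delta -> 0
    t = np.where(w > delta, 1, np.where(w < -delta, -1, 0)).astype(np.int8)
    alpha = np.abs(w[t != 0]).mean() if np.any(t) else 0.0  # per-tensor scale
    return t, alpha   # dequantized weight ~= alpha * t
```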
arXiv Detail & Related papers (2023-02-08T16:25:20Z)
- Deep Reinforcement Learning with Spiking Q-learning [51.386945803485084]
Spiking neural networks (SNNs) are expected to realize artificial intelligence (AI) with less energy consumption.
Combining SNNs with deep reinforcement learning (RL) offers a promising, energy-efficient route to realistic control tasks.
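In such a combination, the SNN serves as the Q-function approximator while the learning target is the standard Q-learning bootstrap; a minimal sketch of that RL side, with illustrative names:

```python
import numpy as np

def q_learning_target(reward, next_q_values, gamma=0.99):
    """Standard bootstrap target r + gamma * max_a' Q(s', a').

    In spiking Q-learning the SNN estimates Q(s, a); this sketch shows only
    the reinforcement-learning side of the combination.
    """
    return reward + gamma * np.max(next_q_values)
```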
arXiv Detail & Related papers (2022-01-21T16:42:11Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- AdderNet and its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence [111.09105910265154]
We present a novel minimalist hardware architecture using the adder convolutional neural network (AdderNet).
In practice, the whole AdderNet achieves a 16% speed enhancement.
We conclude that AdderNet is able to surpass all the other competitors.
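AdderNet's core idea is to replace the multiply-accumulate dot product with a negated L1 distance, so a layer uses only additions and absolute values. A minimal sketch for a fully connected layer (the paper applies it to convolutions):

```python
import numpy as np

def adder_layer(x, w):
    """AdderNet-style feature: -sum(|x - w|) instead of a dot product.

    x: (in_features,) input; w: (out_features, in_features) weights.
    Only subtractions, absolute values, and additions are used.
    """
    return -np.abs(x[None, :] - w).sum(axis=1)
```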
arXiv Detail & Related papers (2021-01-25T11:31:52Z)
- ShiftAddNet: A Hardware-Inspired Deep Network [87.18216601210763]
ShiftAddNet is an energy-efficient multiplication-less deep neural network.
It leads to both energy-efficient inference and training, without compromising expressive capacity.
ShiftAddNet aggressively reduces the hardware-quantified energy cost of DNN training and inference by over 80%, while offering comparable or better accuracies.
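ShiftAddNet factors multiplication into two hardware-cheap primitives: bit shifts (weights constrained to signed powers of two) and additions. A minimal integer sketch of the shift half, with illustrative names and shapes:

```python
import numpy as np

def shift_layer(x_q, shift, sign):
    """Multiplication-free layer with weights w = sign * 2**shift.

    x_q:   (n,) integer activations
    shift: (m, n) non-negative exponents
    sign:  (m, n) entries in {-1, +1}
    Each product w*x becomes a bit shift; results are accumulated by addition.
    """
    return (sign * (x_q[None, :] << shift)).sum(axis=1)
```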
arXiv Detail & Related papers (2020-10-24T05:09:14Z)
- Deep Reinforcement Learning with Population-Coded Spiking Neural Network for Continuous Control [0.0]
We propose a population-coded spiking actor network (PopSAN) trained in conjunction with a deep critic network using deep reinforcement learning (DRL).
We deployed the trained PopSAN on Intel's Loihi neuromorphic chip and benchmarked our method against mainstream DRL algorithms for continuous control.
Our results support the efficiency of neuromorphic controllers and suggest our hybrid RL approach as an alternative to deep learning when both energy efficiency and robustness are important.
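Population coding maps each continuous state dimension onto a group of neurons with overlapping receptive fields. A sketch of the general technique, assuming Gaussian tuning curves; PopSAN's exact encoder may differ.

```python
import numpy as np

def population_encode(value, centers, sigma=0.15):
    """Encode a scalar into a neuron population via Gaussian receptive fields;
    the resulting activations drive the spiking actor's input layer."""
    return np.exp(-0.5 * ((value - centers) / sigma) ** 2)

# Example: encode a normalized state value with 10 neurons tiling [-1, 1]
centers = np.linspace(-1.0, 1.0, 10)
activation = population_encode(0.3, centers)
```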
arXiv Detail & Related papers (2020-10-19T16:20:45Z)
- FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing.
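The STDP rule mentioned here is local and unsupervised: a synapse strengthens when the pre-synaptic spike precedes the post-synaptic one and weakens otherwise. A pair-based sketch with illustrative constants:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP. dt = t_post - t_pre per synapse (same shape as w)."""
    dw = np.where(dt > 0,
                  a_plus * np.exp(-dt / tau),    # pre before post: strengthen
                  -a_minus * np.exp(dt / tau))   # post before pre: weaken
    return np.clip(w + dw, w_min, w_max)
```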
arXiv Detail & Related papers (2020-07-17T09:40:26Z)