NeuSpin: Design of a Reliable Edge Neuromorphic System Based on
Spintronics for Green AI
- URL: http://arxiv.org/abs/2401.06195v1
- Date: Thu, 11 Jan 2024 13:27:19 GMT
- Title: NeuSpin: Design of a Reliable Edge Neuromorphic System Based on
Spintronics for Green AI
- Authors: Soyed Tuhin Ahmed, Kamal Danouchi, Guillaume Prenat, Lorena Anghel,
Mehdi B. Tahoori
- Abstract summary: Internet of Things (IoT) and smart wearable devices for personalized healthcare will require storing and computing ever-increasing amounts of data.
The key requirements for these devices are ultra-low-power, high-processing capabilities, autonomy at low cost, as well as reliability and accuracy to enable Green AI at the edge.
The NeuSPIN project aims to address these challenges through full-stack hardware and software co-design, developing novel algorithmic and circuit design approaches to enhance the performance, energy-efficiency and robustness of BayNNs on spintronic-based CIM platforms.
- Score: 0.22499166814992438
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Internet of Things (IoT) and smart wearable devices for personalized
healthcare will require storing and computing ever-increasing amounts of data.
The key requirements for these devices are ultra-low-power, high-processing
capabilities, autonomy at low cost, as well as reliability and accuracy to
enable Green AI at the edge. Artificial Intelligence (AI) models, especially
Bayesian Neural Networks (BayNNs) are resource-intensive and face challenges
with traditional computing architectures due to the memory wall problem.
Computing-in-Memory (CIM) with emerging resistive memories offers a solution by
combining memory blocks and computing units for higher efficiency and lower
power consumption. However, implementing BayNNs on CIM hardware, particularly
with spintronic technologies, presents technical challenges due to variability
and manufacturing defects. The NeuSPIN project aims to address these challenges
through full-stack hardware and software co-design, developing novel
algorithmic and circuit design approaches to enhance the performance,
energy-efficiency and robustness of BayNNs on spintronic-based CIM platforms.
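The project description above combines three ingredients: Bayesian neural networks whose predictions come from Monte Carlo weight sampling, crossbar-style CIM matrix-vector products, and spintronic device variability. As a rough, illustrative sketch only (the noise model, layer shape, and sampling scheme are assumptions, not the NeuSPIN design), the following NumPy snippet shows how these pieces interact in a single Bayesian layer evaluated on a simulated noisy crossbar.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(x, weights, sigma=0.05):
    """Matrix-vector product on a simulated resistive crossbar.

    Each stored weight is perturbed multiplicatively to mimic
    device-to-device conductance variation (sigma is an assumed value).
    """
    noisy_w = weights * (1.0 + sigma * rng.standard_normal(weights.shape))
    return x @ noisy_w

def bayesian_forward(x, w_mean, w_std, n_samples=20):
    """Monte Carlo inference for a one-layer Bayesian model: sample weights
    from N(w_mean, w_std**2), run each sample through the noisy crossbar,
    and return the predictive mean and a per-output uncertainty estimate."""
    outs = []
    for _ in range(n_samples):
        w = w_mean + w_std * rng.standard_normal(w_mean.shape)
        outs.append(crossbar_mvm(x, w))
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)

# Toy usage: 8 inputs mapped to 4 outputs.
x = rng.standard_normal(8)
w_mean = 0.1 * rng.standard_normal((8, 4))
w_std = np.full((8, 4), 0.05)
prediction, uncertainty = bayesian_forward(x, w_mean, w_std)
print(prediction, uncertainty)
```

The uncertainty output is what makes BayNNs attractive for healthcare-style edge workloads, and the multiplicative noise term is where spintronic variability and manufacturing defects would enter a more faithful model.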
Related papers
- The Reliability Issue in ReRam-based CIM Architecture for SNN: A Survey [11.935228413907875]
Spiking Neural Networks (SNNs) offer a promising alternative by mimicking biological neural networks, enabling energy-efficient computation.
ReRAM and Compute-in-Memory (CIM) architectures aim to overcome the Von Neumann bottleneck by integrating storage and computation.
This survey explores the intersection of SNNs and ReRAM-based CIM architectures, focusing on the reliability challenges that arise from device-level variations and operational errors.
arXiv Detail & Related papers (2024-11-30T16:03:24Z) - Sparsity-Aware Hardware-Software Co-Design of Spiking Neural Networks: An Overview [1.0499611180329804]
Spiking Neural Networks (SNNs) are inspired by the sparse and event-driven nature of biological neural processing, and offer the potential for ultra-low-power artificial intelligence.
We explore the hardware-software co-design of sparse SNNs, examining how sparsity representation, hardware architectures, and training techniques influence hardware efficiency.
Our work aims to illuminate the path towards embedded neuromorphic systems that fully exploit the computational advantages of sparse SNNs.
arXiv Detail & Related papers (2024-08-26T17:22:11Z) - Efficient and accurate neural field reconstruction using resistive memory [52.68088466453264]
- Efficient and accurate neural field reconstruction using resistive memory [52.68088466453264]
Traditional signal reconstruction methods on digital computers face both software and hardware challenges.
We propose a systematic approach with software-hardware co-optimizations for signal reconstruction from sparse inputs.
This work advances the AI-driven signal restoration technology and paves the way for future efficient and robust medical AI and 3D vision applications.
arXiv Detail & Related papers (2024-04-15T09:33:09Z) - Random resistive memory-based deep extreme point learning machine for
unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design: a random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-design achieves substantial energy-efficiency improvements and training-cost reductions compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z) - Pruning random resistive memory for optimizing analogue AI [54.21621702814583]
AI models present unprecedented challenges to energy consumption and environmental sustainability.
One promising solution is to revisit analogue computing, a technique that predates digital computing.
Here, we report a universal solution, software-hardware co-design using structural plasticity-inspired edge pruning.
arXiv Detail & Related papers (2023-11-13T08:59:01Z) - SpikingJelly: An open-source machine learning infrastructure platform
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z) - A perspective on physical reservoir computing with nanomagnetic devices [1.9007022664972197]
We focus on the reservoir computing paradigm, a recurrent network with a simple training algorithm suitable for computation with spintronic devices.
We review technologies and methods for developing neuromorphic spintronic devices and conclude with critical open issues to address before such devices become widely used.
arXiv Detail & Related papers (2022-12-09T13:43:21Z) - Enable Deep Learning on Mobile Devices: Methods, Systems, and
- Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications [46.97774949613859]
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI).
However, their superior performance comes at the considerable cost of computational complexity.
This paper provides an overview of efficient deep learning methods, systems and applications.
arXiv Detail & Related papers (2022-04-25T16:52:48Z) - FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA and uses around 40% of the available hardware resources.
Compared to its full-precision software counterpart, it reduces classification time by three orders of magnitude with only a 4.5% impact on accuracy.
arXiv Detail & Related papers (2022-01-18T13:59:22Z) - Resistive Neural Hardware Accelerators [0.46198289193451136]
The shift towards ReRAM-based in-memory computing has great potential for area- and power-efficient inference.
In this survey, we review state-of-the-art ReRAM-based many-core accelerators for Deep Neural Networks (DNNs).
arXiv Detail & Related papers (2021-09-08T21:11:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.