An Adaptive Synaptic Array using Fowler-Nordheim Dynamic Analog Memory
- URL: http://arxiv.org/abs/2104.05926v1
- Date: Tue, 13 Apr 2021 04:08:04 GMT
- Title: An Adaptive Synaptic Array using Fowler-Nordheim Dynamic Analog Memory
- Authors: Darshit Mehta, Kenji Aono and Shantanu Chakrabartty
- Abstract summary: We present a synaptic array that uses dynamical states to implement an analog memory for energy-efficient training of machine learning (ML) systems.
With energy dissipation as low as 5 fJ per memory update and a programming resolution of up to 14 bits, the proposed synapse array could be used to address the energy-efficiency imbalance between the training and the inference phases.
- Score: 6.681943980068049
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present a synaptic array that uses dynamical states to
implement an analog memory for energy-efficient training of machine learning
(ML) systems. Each of the analog memory elements is a micro-dynamical system
that is driven by the physics of Fowler-Nordheim (FN) quantum tunneling,
whereas the system level learning modulates the state trajectory of the memory
ensembles towards the optimal solution. We show that the extrinsic energy
required for modulation can be matched to the dynamics of learning and weight
decay, leading to a significant reduction in the energy dissipated during ML
training. With energy dissipation as low as 5 fJ per memory update and a
programming resolution of up to 14 bits, the proposed synapse array could be used
to address the energy-efficiency imbalance between the training and the
inference phases observed in artificial intelligence (AI) systems.
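
To make the memory mechanism concrete, below is a minimal simulation sketch of a single FN-DAM element. It assumes a simplified FN tunneling model in which the floating-gate node discharges according to C dV/dt = -A V^2 exp(-B/V); the constants A, B, C, the pulse size, and the update schedule are illustrative placeholders, not device values from the paper.

```python
import numpy as np

# Simplified Fowler-Nordheim (FN) tunneling model for one floating-gate
# memory element. The FN current scales as I ~ A*V^2*exp(-B/V), so the
# floating-gate node discharges according to  C dV/dt = -A V^2 exp(-B/V).
# A, B, C, the pulse size, and the schedule are illustrative placeholders.
A, B, C = 1e-15, 60.0, 1e-15   # hypothetical device constants

def fn_decay(v, dt):
    """One Euler step of the intrinsic FN leakage (acts as weight decay)."""
    i_fn = A * v**2 * np.exp(-B / v)
    return v - (i_fn / C) * dt

def program(v, dv):
    """Extrinsic modulation: a small programming pulse nudges the state."""
    return v + dv

v, dt = 8.0, 1e-3              # initial floating-gate voltage (V), step (s)
for step in range(100_000):
    v = fn_decay(v, dt)
    if step % 10_000 == 0:     # sparse, learning-driven updates
        v = program(v, +0.01)  # e.g., a gradient-sign increment
print(f"final state: {v:.4f} V")
```

In this picture the intrinsic FN leakage supplies the weight-decay dynamics essentially for free; extrinsic energy is spent only on the sparse programming pulses, which is where the reported figure of ~5 fJ per update applies.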
Related papers
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AI-generated content (AIGC) methods, such as score-based diffusion, still fall short in terms of speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Energy-efficiency Limits on Training AI Systems using Learning-in-Memory [5.44286162776241]
We derive new theoretical lower bounds on energy dissipation when training AI systems using different learning-in-memory (LIM) approaches.
Our projections suggest that the energy-dissipation lower bound to train a brain-scale AI system using LIM is $10^8 \sim 10^9$ Joules.
arXiv Detail & Related papers (2024-02-21T21:02:11Z)
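
As a rough plausibility check, this bound can be related to the main paper's 5 fJ/update figure by assuming a brain-scale synapse count of $10^{15}$ and $10^{8}$ updates per synapse over a full training run (both counts are assumptions, not figures from either paper):

$E_{\rm train} \approx N_{\rm syn} \cdot N_{\rm upd} \cdot E_{\rm upd} \approx 10^{15} \cdot 10^{8} \cdot 5\,{\rm fJ} = 5 \times 10^{8}\,{\rm J},$

which falls inside the quoted $10^8 \sim 10^9$ Joule range.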
- CIMulator: A Comprehensive Simulation Platform for Computing-In-Memory Circuit Macros with Low Bit-Width and Real Memory Materials [0.5325753548715747]
This paper presents a simulation platform, namely CIMulator, for quantifying the efficacy of various synaptic devices in neuromorphic accelerators.
Non-volatile memory devices, such as resistive random-access memory, ferroelectric field-effect transistor, and volatile static random-access memory devices, can be selected as synaptic devices.
A multilayer perceptron and convolutional neural networks (CNNs), such as LeNet-5, VGG-16, and a custom CNN named C4W-1, are simulated to evaluate the effects of these synaptic devices on the training and inference outcomes.
arXiv Detail & Related papers (2023-06-26T12:36:07Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework, named ConCerNet, to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- ETLP: Event-based Three-factor Local Plasticity for online learning with neuromorphic hardware [105.54048699217668]
We show competitive accuracy, with a clear advantage in computational complexity, for Event-based Three-factor Local Plasticity (ETLP).
We also show that when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn temporal patterns with a rich temporal structure.
arXiv Detail & Related papers (2023-01-19T19:45:42Z)
- Gradient-based Neuromorphic Learning on Dynamical RRAM Arrays [3.5969667977870796]
We present MEMprop, which adopts gradient-based learning to train fully memristive spiking neural networks (MSNNs).
Our approach harnesses intrinsic device dynamics to trigger naturally arising voltage spikes.
We obtain highly competitive accuracy among previously reported lightweight, dense MSNNs on several benchmarks.
arXiv Detail & Related papers (2022-06-26T23:13:34Z)
- A Fully Memristive Spiking Neural Network with Unsupervised Learning [2.8971214387667494]
The system is fully memristive in that both neuronal and synaptic dynamics can be realized by using memristors.
The proposed MSNN implements spike-timing-dependent plasticity (STDP) learning through cumulative weight changes in the memristive synapses, driven by the voltage waveforms across them.
arXiv Detail & Related papers (2022-03-02T21:16:46Z)
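
Since the entry above centers on STDP, here is a generic pair-based STDP rule for reference; this is the textbook exponential-window form, not the paper's memristive realization, and the amplitudes and time constant are illustrative.

```python
import numpy as np

# Generic pair-based STDP window: a synapse potentiates when the
# presynaptic spike precedes the postsynaptic one, and depresses otherwise.
A_PLUS, A_MINUS = 0.01, 0.012     # learning-rate amplitudes (illustrative)
TAU = 20e-3                       # plasticity time constant, 20 ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in seconds)."""
    dt = t_post - t_pre
    if dt > 0:                    # pre before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)   # post before pre -> depression

w = 0.5
for t_pre, t_post in [(0.010, 0.015), (0.050, 0.045)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
print(f"updated weight: {w:.4f}")
```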
- Energy Efficient Learning with Low Resolution Stochastic Domain Wall Synapse Based Deep Neural Networks [0.9176056742068814]
We demonstrate that extremely low-resolution quantized (nominally 5-state) synapses with large variations in domain wall (DW) position can be both energy efficient and achieve reasonably high testing accuracies.
We show that suitable modifications to the learning algorithm can compensate for the stochastic behavior as well as the low resolution of these synapses, achieving high testing accuracies.
arXiv Detail & Related papers (2021-11-14T09:12:29Z)
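
A generic way to train with such coarse, stochastic synapses is stochastic rounding onto the available device states; the sketch below illustrates that idea and is not the authors' specific algorithm (the 5-level grid, learning rate, and toy gradient are placeholders).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-state synapse: weights restricted to {-1, -0.5, 0, 0.5, 1}.
LEVELS = np.linspace(-1.0, 1.0, 5)

def stochastic_quantize(w):
    """Round each weight to a neighboring level with probability
    proportional to its distance, so small gradient updates are
    preserved on average despite the coarse resolution."""
    idx = np.clip((w - LEVELS[0]) / 0.5, 0, 4)   # fractional level index
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, 4)
    p_hi = idx - lo                               # chance of rounding up
    pick_hi = rng.random(w.shape) < p_hi
    return LEVELS[np.where(pick_hi, hi, lo)]

# Usage: accumulate full-precision gradient steps in a shadow weight,
# then project onto the 5 device states for inference.
w_shadow = rng.normal(0.0, 0.3, size=(4,))
grad = np.array([0.08, -0.02, 0.15, -0.11])      # toy gradient
w_shadow -= 0.1 * grad                            # SGD step, lr = 0.1
w_device = stochastic_quantize(w_shadow)
print(w_device)
```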
- DySMHO: Data-Driven Discovery of Governing Equations for Dynamical Systems via Moving Horizon Optimization [77.34726150561087]
We introduce Discovery of Dynamical Systems via Moving Horizon Optimization (DySMHO), a scalable machine learning framework.
DySMHO sequentially learns the underlying governing equations from a large dictionary of basis functions.
Canonical nonlinear dynamical system examples are used to demonstrate that DySMHO can accurately recover the governing laws.
arXiv Detail & Related papers (2021-07-30T20:35:03Z)
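
The dictionary idea behind DySMHO can be illustrated with a plain least-squares-plus-thresholding sketch in the spirit of sparse-regression discovery methods; this is not DySMHO's moving-horizon formulation, and the toy system, dictionary, and threshold are arbitrary choices.

```python
import numpy as np

# Toy system with known governing law  dx/dt = -2 x + 0.5 x^3.
# Generate a trajectory, then recover the law from data alone using a
# dictionary of candidate basis functions.
dt, n = 1e-3, 2000
x = np.empty(n)
x[0] = 1.0
for k in range(n - 1):
    x[k + 1] = x[k] + dt * (-2.0 * x[k] + 0.5 * x[k] ** 3)

dxdt = np.gradient(x, dt)          # derivative estimated from the data

# Dictionary of candidate right-hand-side terms: [1, x, x^2, x^3]
Theta = np.column_stack([np.ones(n), x, x**2, x**3])

# Least-squares fit, then hard-threshold tiny coefficients so the
# recovered model stays sparse and interpretable (one pass for brevity;
# practical methods iterate the fit/threshold cycle).
xi, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
xi[np.abs(xi) < 0.1] = 0.0

terms = ["1", "x", "x^2", "x^3"]
law = " + ".join(f"{c:.2f}*{s}" for c, s in zip(xi, terms) if c != 0.0)
print("recovered: dx/dt =", law)   # expect approximately -2.00*x + 0.50*x^3
```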
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
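
For reference, equilibrium propagation can be sketched on a toy Hopfield-style energy model: relax to a free equilibrium, relax again with the output weakly nudged toward the target, and update each weight from the difference of the two correlation terms. The network size, nudging strength beta, and learning rate below are illustrative; the paper's contribution is realizing such energy-based training in physical nonlinear resistive networks rather than in software.

```python
import numpy as np

rng = np.random.default_rng(2)
rho = lambda s: np.clip(s, 0.0, 1.0)                 # hard-sigmoid activation
drho = lambda s: ((s > 0.0) & (s < 1.0)).astype(float)

n_in, n_hid, n_out = 4, 8, 2
n = n_in + n_hid + n_out
W = rng.normal(0.0, 0.1, (n, n))
W = (W + W.T) / 2.0                                   # symmetric coupling
np.fill_diagonal(W, 0.0)
free = slice(n_in, n)                                 # inputs stay clamped

def relax(s, y=None, beta=0.0, steps=200, eta=0.1):
    """Gradient-descend the Hopfield energy
    E = 0.5*||s||^2 - 0.5*rho(s)^T W rho(s),
    optionally nudged by beta toward the target y."""
    for _ in range(steps):
        grad = s - drho(s) * (W @ rho(s))             # dE/ds
        if y is not None:
            grad[-n_out:] += beta * (s[-n_out:] - y)  # d(beta*C)/ds_out
        s[free] -= eta * grad[free]
    return s

x, y = rng.random(n_in), np.array([1.0, 0.0])
s = np.zeros(n)
s[:n_in] = x

s0 = relax(s.copy())                                  # free phase
beta = 0.5
sb = relax(s0.copy(), y=y, beta=beta)                 # weakly nudged phase

# Equilibrium propagation update: contrast the two equilibria.
lr = 0.05
dW = (np.outer(rho(sb), rho(sb)) - np.outer(rho(s0), rho(s0))) / beta
W += lr * dW
np.fill_diagonal(W, 0.0)
```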
This list is automatically generated from the titles and abstracts of the papers on this site.