Memristive Stochastic Computing for Deep Learning Parameter Optimization
- URL: http://arxiv.org/abs/2103.06506v1
- Date: Thu, 11 Mar 2021 07:10:32 GMT
- Title: Memristive Stochastic Computing for Deep Learning Parameter Optimization
- Authors: Corey Lammie, Jason K. Eshraghian, Wei D. Lu, Mostafa Rahimi Azghadi
- Abstract summary: Stochastic Computing (SC) is a computing paradigm that allows for the low-cost and low-power computation of various arithmetic operations using stochastic bit streams and digital logic.
We demonstrate that, using a 40-nm Complementary Metal Oxide Semiconductor (CMOS) process, our scalable architecture occupies 1.55mm$^2$ and consumes approximately 167$\mu$W when optimizing parameters of a Convolutional Neural Network (CNN) while it is being trained for a character recognition task, observing no notable reduction in accuracy post-training.
- Score: 1.6344851071810071
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stochastic Computing (SC) is a computing paradigm that allows for the
low-cost and low-power computation of various arithmetic operations using
stochastic bit streams and digital logic. In contrast to conventional
representation schemes used within the binary domain, the sequence of bit
streams in the stochastic domain is inconsequential, and computation is usually
non-deterministic. In this brief, we exploit the stochasticity during switching
of probabilistic Conductive Bridging RAM (CBRAM) devices to efficiently
generate stochastic bit streams in order to perform Deep Learning (DL)
parameter optimization, reducing the size of Multiply and Accumulate (MAC)
units by 5 orders of magnitude. We demonstrate that, using a 40-nm
Complementary Metal Oxide Semiconductor (CMOS) process, our scalable
architecture occupies 1.55mm$^2$ and consumes approximately 167$\mu$W when
optimizing parameters of a Convolutional Neural Network (CNN) while it is being
trained for a character recognition task, observing no notable reduction in
accuracy post-training.
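As a concrete illustration of the SC arithmetic the paper builds on, the following minimal Python sketch encodes two probabilities as unipolar bit streams and multiplies them with a bitwise AND. A software pseudo-random generator stands in for the probabilistic CBRAM devices, and the stream length is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)  # software stand-in for stochastic CBRAM switching

def to_stream(p: float, length: int = 1024) -> np.ndarray:
    """Encode a probability p in [0, 1] as a unipolar stochastic bit stream."""
    return (rng.random(length) < p).astype(np.uint8)

def from_stream(bits: np.ndarray) -> float:
    """Decode a bit stream back into a probability estimate."""
    return float(bits.mean())

a, b = 0.75, 0.40
sa, sb = to_stream(a), to_stream(b)

# In the unipolar stochastic domain, multiplication of two independent
# streams reduces to a single AND gate per bit position.
estimate = from_stream(sa & sb)
print(f"exact {a * b:.3f} vs stochastic estimate {estimate:.3f}")
```

Because the value is carried by the fraction of 1s rather than by bit position, the ordering of the stream is inconsequential, which is why a single logic gate can replace a full multiplier.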
Related papers
- UIO-LLMs: Unbiased Incremental Optimization for Long-Context LLMs [111.12010207132204]
UIO-LLMs is an incremental optimization approach for memory-enhanced transformers under long-context settings.
We refine the training process using the Truncated Backpropagation Through Time (TBPTT) algorithm.
UIO-LLMs successfully handle long contexts, for example extending the context window of Llama2-7b-chat from 4K to 100K tokens with only 2% additional parameters.
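For reference, a generic PyTorch sketch of Truncated Backpropagation Through Time (TBPTT) on a small recurrent model is shown below; the model, chunk size, and data are illustrative assumptions and not the UIO-LLMs memory-enhanced transformer.

```python
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
head = nn.Linear(32, 16)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)

x = torch.randn(4, 1024, 16)         # (batch, long sequence, features)
target = torch.randn(4, 1024, 16)
chunk = 128                          # truncation length
h = None                             # recurrent state carried across chunks

for start in range(0, x.size(1), chunk):
    xs = x[:, start:start + chunk]
    ys = target[:, start:start + chunk]
    out, h = rnn(xs, h)              # out: (batch, chunk, hidden)
    loss = nn.functional.mse_loss(head(out), ys)
    opt.zero_grad()
    loss.backward()
    opt.step()
    h = h.detach()                   # cut the graph: gradients stop at the chunk boundary
```

Detaching the recurrent state at each chunk boundary keeps the backward graph bounded while still passing information forward through the state.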
arXiv Detail & Related papers (2024-06-26T08:44:36Z)
- A Distributed Data-Parallel PyTorch Implementation of the Distributed Shampoo Optimizer for Training Neural Networks At-Scale [5.206015354543744]
Shampoo is an online and stochastic optimization algorithm belonging to the AdaGrad family of methods for training neural networks.
We provide a complete description of the algorithm as well as the performance optimizations that our implementation leverages to train deep networks at-scale in PyTorch.
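A minimal single-matrix sketch of the Shampoo preconditioning rule ($L^{-1/4} G R^{-1/4}$) is given below; the distributed, blocked PyTorch implementation described in the paper adds blocking, grafting, and communication optimizations not shown here, and the shapes, learning rate, and epsilon are illustrative assumptions.

```python
import torch

def matrix_inv_root(M: torch.Tensor, root: int, eps: float = 1e-6) -> torch.Tensor:
    """Return M^(-1/root) for a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = torch.linalg.eigh(M)
    return vecs @ torch.diag(vals.clamp_min(eps) ** (-1.0 / root)) @ vecs.T

W = torch.randn(8, 4)                 # toy weight matrix
L = torch.zeros(8, 8)                 # left (row) statistics
R = torch.zeros(4, 4)                 # right (column) statistics
lr = 0.1

for step in range(100):
    G = torch.randn(8, 4)             # stand-in for a real gradient
    L += G @ G.T                      # accumulate second-moment statistics
    R += G.T @ G
    # Shampoo update for an order-2 tensor: W <- W - lr * L^(-1/4) G R^(-1/4)
    W -= lr * matrix_inv_root(L, 4) @ G @ matrix_inv_root(R, 4)
```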
arXiv Detail & Related papers (2023-09-12T18:11:10Z)
- ADC/DAC-Free Analog Acceleration of Deep Neural Networks with Frequency Transformation [2.7488316163114823]
This paper proposes a novel approach to energy-efficient acceleration of frequency-domain neural networks by utilizing analog-domain frequency-based tensor transformations.
Our approach achieves more compact cells by eliminating the need for trainable parameters in the transformation matrix.
On a 16$\times$16 crossbar, for 8-bit input processing, the proposed approach achieves an energy efficiency of 1602 tera operations per second per Watt.
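The digital counterpart of that frequency-domain processing is the convolution theorem, sketched below in NumPy: circular convolution equals element-wise multiplication after an FFT. This only illustrates the underlying transform; the analog crossbar implementation in the paper is not modeled, and the signal sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
x = rng.normal(size=N)    # input signal
k = rng.normal(size=N)    # kernel of the same length

# direct circular convolution: y[n] = sum_m x[m] * k[(n - m) mod N]
direct = np.array([sum(x[m] * k[(n - m) % N] for m in range(N)) for n in range(N)])

# frequency-domain version: FFT -> pointwise multiply -> inverse FFT
freq = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real

print(np.max(np.abs(direct - freq)))   # should be near machine precision
```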
arXiv Detail & Related papers (2023-09-04T19:19:39Z)
- Randomized Polar Codes for Anytime Distributed Machine Learning [66.46612460837147]
We present a novel distributed computing framework that is robust to slow compute nodes, and is capable of both approximate and exact computation of linear operations.
We propose a sequential decoding algorithm designed to handle real-valued data while maintaining low computational complexity for recovery.
We demonstrate the potential applications of this framework in various contexts, such as large-scale matrix multiplication and black-box optimization.
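A toy single-parity code, sketched below, shows the coded-computing idea in its simplest form: one redundant worker lets the result of a distributed matrix-vector product survive a straggler. The randomized polar codes in the paper generalize far beyond this, and the block sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))
x = rng.normal(size=4)

A1, A2 = A[:3], A[3:]          # split the work across two workers
A3 = A1 + A2                   # parity block assigned to a third, redundant worker

y1, y2, y3 = A1 @ x, A2 @ x, A3 @ x   # results computed by the three workers

# Suppose worker 2 straggles: recover its block from the parity result.
y2_recovered = y3 - y1
print(np.allclose(np.concatenate([y1, y2_recovered]), A @ x))
```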
arXiv Detail & Related papers (2023-09-01T18:02:04Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
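For orientation, a minimal snnTorch example of a leaky integrate-and-fire layer simulated over time is sketched below, assuming the standard snnTorch Leaky API; it runs on ordinary PyTorch rather than the IPU-optimized release, and the layer sizes, decay factor, and inputs are illustrative assumptions.

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)          # leaky integrate-and-fire neuron, membrane decay 0.9
fc = torch.nn.Linear(10, 5)

mem = lif.init_leaky()             # initial membrane potential
x = torch.rand(20, 3, 10)          # (time steps, batch, features)

spikes = []
for t in range(x.size(0)):
    cur = fc(x[t])                 # synaptic current at this time step
    spk, mem = lif(cur, mem)       # spike output and updated membrane potential
    spikes.append(spk)

print(torch.stack(spikes).shape)   # (time, batch, neurons)
```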
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
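The random-feature trick can be illustrated with classical random Fourier features for an RBF kernel, as sketched below; RFAD applies the same idea to the NNGP kernel, and the kernel choice, dimensions, and bandwidth here are illustrative assumptions rather than the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 8, 2048, 1.0

W = rng.normal(scale=1.0 / sigma, size=(D, d))   # random projection directions
b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases

def features(X):
    """Map X of shape (n, d) to random Fourier features of shape (n, D)."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

X, Y = rng.normal(size=(5, d)), rng.normal(size=(6, d))
approx = features(X) @ features(Y).T                          # explicit feature dot products
sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
exact = np.exp(-sq / (2.0 * sigma ** 2))                      # exact RBF kernel
print(np.max(np.abs(approx - exact)))                         # small for large D
```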
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- Machine Learning-Based Optimal Mesh Generation in Computational Fluid Dynamics [0.0]
We propose a machine learning approach to identify optimal mesh densities.
We generate optimized meshes using classical methodologies and propose to train a convolutional network predicting optimal mesh densities.
Using a training set of 20,000 simulations, we achieve accuracies of more than 98.7%.
arXiv Detail & Related papers (2021-02-25T15:25:17Z)
- Efficient Learning of Generative Models via Finite-Difference Score Matching [111.55998083406134]
We present a generic strategy to efficiently approximate any-order directional derivatives with finite differences.
Our approximation only involves function evaluations, which can be executed in parallel, and no gradient computations.
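The core estimator is the central finite difference of a directional derivative, $(f(x + \epsilon v) - f(x - \epsilon v)) / (2\epsilon) \approx v \cdot \nabla f(x)$, which needs only two function evaluations and no gradients. A small NumPy sketch with an illustrative test function follows.

```python
import numpy as np

def directional_derivative(f, x, v, eps=1e-4):
    """Central finite-difference estimate of v . grad f(x) from two evaluations."""
    return (f(x + eps * v) - f(x - eps * v)) / (2.0 * eps)

f = lambda x: np.sum(np.sin(x) ** 2)             # any scalar-valued function
x = np.linspace(0.0, 1.0, 8)
v = np.ones_like(x) / np.sqrt(x.size)            # unit direction

exact = np.dot(2.0 * np.sin(x) * np.cos(x), v)   # analytic v . grad f(x)
print(directional_derivative(f, x, v), exact)
```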
arXiv Detail & Related papers (2020-07-07T10:05:01Z)
- Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
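A toy predictive-coding network with one hidden layer and linear predictions is sketched below to make the local update scheme concrete: value nodes relax to reduce local prediction errors, then weights are updated from the settled errors. The layer sizes, rates, and iteration counts are illustrative assumptions and not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(16, 8))   # predicts the hidden layer from the input
W2 = rng.normal(scale=0.1, size=(8, 4))    # predicts the output from the hidden layer

def train_step(x, y, lr=0.01, infer_steps=50, infer_lr=0.1):
    h = x @ W1                      # initialize hidden value nodes at the prediction
    for _ in range(infer_steps):    # inference phase: relax hidden nodes
        e_h = h - x @ W1            # prediction error at the hidden layer
        e_y = y - h @ W2            # prediction error at the clamped output layer
        h += infer_lr * (-e_h + e_y @ W2.T)
    # local, Hebbian-like weight updates driven by the settled errors
    e_h = h - x @ W1
    e_y = y - h @ W2
    W1 += lr * x.T @ e_h
    W2 += lr * h.T @ e_y

x = rng.normal(size=(32, 16))
y = x @ rng.normal(size=(16, 4)) * 0.1          # synthetic regression target
for _ in range(200):
    train_step(x, y)
print(float(np.mean((x @ W1 @ W2 - y) ** 2)))   # feedforward prediction error
```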
arXiv Detail & Related papers (2020-06-07T15:35:47Z)
- ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning [1.2019888796331233]
Matrix-vector multiplications (MVM) and vector-vector outer products (VVOP) are the two most expensive operations associated with the training of deep neural networks (DNNs).
We introduce efficient techniques that generalize SC for weight-update computation in DNNs with the activation functions required by many state-of-the-art networks.
Our architecture reduces the computational cost by re-using random numbers and replacing certain FP multiplication operations by bit shift scaling.
Hardware design of ESSOP at the 14nm technology node shows that, compared to a highly pipelined FP16 multiplier, ESSOP is 82.2% and 93.7% better in energy and area efficiency, respectively.
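In the same spirit, the following NumPy sketch estimates the VVOP weight update $x \delta^T$ by AND-ing stochastic bit streams while sharing one random sequence per operand vector to limit random-number generation; the stream length, value ranges, and scaling are illustrative assumptions rather than the ESSOP circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_outer(x, delta, length=2048):
    """Estimate the outer product x @ delta.T with AND-ed stochastic bit streams."""
    rx = rng.random(length)            # random numbers reused across all rows
    rd = rng.random(length)            # random numbers reused across all columns
    sx = (np.abs(x)[:, None] > rx).astype(np.uint8)        # (n, length) bit streams
    sd = (np.abs(delta)[:, None] > rd).astype(np.uint8)    # (m, length) bit streams
    mag = (sx[:, None, :] & sd[None, :, :]).mean(axis=-1)  # AND gates + averaging
    return np.sign(x)[:, None] * np.sign(delta)[None, :] * mag

x = rng.uniform(-1, 1, size=5)         # activations, assumed scaled to [-1, 1]
delta = rng.uniform(-1, 1, size=3)     # backpropagated errors, same assumption
print(np.max(np.abs(stochastic_outer(x, delta) - np.outer(x, delta))))
```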
arXiv Detail & Related papers (2020-03-25T07:54:42Z)