One-step regression and classification with crosspoint resistive memory arrays
- URL: http://arxiv.org/abs/2005.01988v1
- Date: Tue, 5 May 2020 08:00:07 GMT
- Title: One-step regression and classification with crosspoint resistive memory arrays
- Authors: Zhong Sun, Giacomo Pedretti, Alessandro Bricalli, Daniele Ielmini
- Abstract summary: High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of the prediction of the cost of a house in Boston and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning has been attracting great attention in recent years as a tool to process the big data generated by ubiquitous sensors in our daily life.
High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge, i.e., without the support of a remote server in the cloud. Such requirements challenge complementary metal-oxide-semiconductor (CMOS) technology, which is limited by Moore's law approaching its end and by the communication bottleneck of the conventional computing architecture. Novel computing concepts, architectures, and devices are thus strongly needed to accelerate data-intensive applications. Here we show that a crosspoint resistive memory circuit with a feedback configuration can execute
linear regression and logistic regression in just one step by computing the
pseudoinverse matrix of the data within the memory. The most elementary learning operations, namely the regression of a sequence of data and the classification of a set of data, can thus be executed in a single computational step by this novel technology. One-step learning is further
supported by simulations of the prediction of the cost of a house in Boston and
the training of a 2-layer neural network for MNIST digit recognition. The
results are all obtained in one computational step, thanks to the physical,
parallel, and analog computing within the crosspoint array.
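In conventional digital form, the one-step solution described above reduces to a single multiplication by the Moore-Penrose pseudoinverse, w = X⁺y. The NumPy sketch below illustrates only that linear-algebra step on synthetic data; the matrix sizes, noise level, and variable names are illustrative assumptions, and in the paper the operation is performed physically by the feedback crosspoint circuit rather than by np.linalg.pinv.

```python
import numpy as np

# Minimal digital sketch of one-step least-squares regression:
# the weight vector is obtained as w = pinv(X) @ y in a single
# matrix multiplication (synthetic data, illustrative sizes).
rng = np.random.default_rng(0)

X = rng.normal(size=(200, 4))                  # 200 samples, 4 features
w_true = np.array([2.0, -1.0, 0.5, 3.0])       # ground-truth weights
y = X @ w_true + 0.05 * rng.normal(size=200)   # noisy targets

w_fit = np.linalg.pinv(X) @ y                  # the "one step": X+ y
print(np.round(w_fit, 3))                      # close to w_true
```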
Related papers
- Memory Is All You Need: An Overview of Compute-in-Memory Architectures for Accelerating Large Language Model Inference [2.9302211589186244]
Large language models (LLMs) have transformed natural language processing, enabling machines to generate human-like text and engage in meaningful conversations.
Developments in computing and memory capabilities, however, are lagging behind, a gap exacerbated by the end of Moore's law. Compute-in-memory (CIM) technologies offer a promising solution for accelerating AI inference by directly performing analog computations in memory.
arXiv Detail & Related papers (2024-06-12T16:57:58Z)
- Efficient and accurate neural field reconstruction using resistive memory [52.68088466453264]
Traditional signal reconstruction methods on digital computers face both software and hardware challenges.
We propose a systematic approach with software-hardware co-optimizations for signal reconstruction from sparse inputs.
This work advances the AI-driven signal restoration technology and paves the way for future efficient and robust medical AI and 3D vision applications.
arXiv Detail & Related papers (2024-04-15T09:33:09Z)
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, still fall short in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z)
- Pruning random resistive memory for optimizing analogue AI [54.21621702814583]
AI models present unprecedented challenges to energy consumption and environmental sustainability.
One promising solution is to revisit analogue computing, a technique that predates digital computing.
Here, we report a universal solution, software-hardware co-design using structural plasticity-inspired edge pruning.
arXiv Detail & Related papers (2023-11-13T08:59:01Z)
- A Deep Neural Network Deployment Based on Resistive Memory Accelerator Simulation [0.0]
The objective of this study is to illustrate the process of training a Deep Neural Network (DNN) within a Resistive RAM (ReRAM) accelerator simulation.
The CrossSim API is designed to simulate neural networks while taking into account factors that may affect the accuracy of solutions.
arXiv Detail & Related papers (2023-04-22T07:29:02Z)
- A Co-design view of Compute in-Memory with Non-Volatile Elements for Neural Networks [12.042322495445196]
We discuss how compute-in-memory can play an important part in the next generation of computing hardware.
A non-volatile-memory-based cross-bar architecture forms the heart of an engine that uses an analog process to parallelize the matrix-vector multiplication operation.
The cross-bar architecture, at times referred to as a neuromorphic approach, can be a key hardware element in future computing machines.
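As a rough numerical sketch of the analog matrix-vector multiplication such a cross-bar performs: stored weights map to cell conductances, inputs map to applied voltages, and Ohm's and Kirchhoff's laws deliver each output current as a dot product. The conductance and voltage values below are illustrative assumptions, not device data.

```python
import numpy as np

# Idealized cross-bar MVM: weights stored as conductances G (siemens),
# inputs applied as voltages V (volts); the current collected on each
# output line is I[i] = sum_j G[i, j] * V[j], i.e. one full
# matrix-vector product obtained in a single analog step.
G = np.array([[1.0e-6, 2.0e-6, 0.5e-6],
              [3.0e-6, 1.5e-6, 2.5e-6]])   # 2 output lines x 3 input lines
V = np.array([0.2, 0.1, 0.3])              # input voltage vector

I = G @ V                                  # output currents (amperes)
print(I)                                   # [5.5e-07 1.5e-06]
```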
arXiv Detail & Related papers (2022-06-03T15:59:46Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- Monolithic Silicon Photonic Architecture for Training Deep Neural Networks with Direct Feedback Alignment [0.6501025489527172]
We propose on-chip training of neural networks enabled by a CMOS-compatible silicon photonic architecture.
Our scheme employs the direct feedback alignment training algorithm, which trains neural networks using error feedback rather than error backpropagation.
We experimentally demonstrate training a deep neural network with the MNIST dataset using on-chip MAC operation results.
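Direct feedback alignment replaces the transposed-weight error path of backpropagation with a fixed random feedback matrix. The toy NumPy sketch below shows only that substitution for a small two-layer network; the layer sizes, learning rate, and random matrices are illustrative assumptions and do not model the photonic hardware.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network trained with direct feedback alignment (DFA):
# the output error is projected to the hidden layer through a fixed
# random matrix B instead of W2.T as in backpropagation.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
B = rng.normal(scale=0.1, size=(n_hid, n_out))   # fixed random feedback
lr = 0.1

x = rng.normal(size=n_in)
target = np.eye(n_out)[2]                        # one-hot target

for _ in range(200):
    h = np.tanh(W1 @ x)                          # forward pass
    y = W2 @ h
    e = y - target                               # output error
    dh = (B @ e) * (1.0 - h ** 2)                # DFA error signal
    W2 -= lr * np.outer(e, h)                    # local updates
    W1 -= lr * np.outer(dh, x)
```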
arXiv Detail & Related papers (2021-11-12T18:31:51Z)
- Hybrid In-memory Computing Architecture for the Training of Deep Neural Networks [5.050213408539571]
We propose a hybrid in-memory computing architecture for the training of deep neural networks (DNNs) on hardware accelerators.
We show that HIC-based training reduces the inference model size by about 50% while achieving accuracy comparable to the baseline.
Our simulations indicate HIC-based training naturally ensures that the number of write-erase cycles seen by the devices is a small fraction of the endurance limit of PCM.
arXiv Detail & Related papers (2021-02-10T05:26:27Z)
- Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits [99.59941892183454]
We propose Einsum Networks (EiNets), a novel implementation design for PCs.
At their core, EiNets combine a large number of arithmetic operations in a single monolithic einsum operation (see the sketch after this entry).
We show that the implementation of Expectation-Maximization (EM) can be simplified for PCs, by leveraging automatic differentiation.
arXiv Detail & Related papers (2020-04-13T23:09:15Z)
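The monolithic-einsum idea referenced above can be illustrated with a toy layer in which all product and sum nodes of one layer are evaluated by a single batched einsum call. The shapes and names below are illustrative assumptions and do not reproduce the EiNet library's actual API.

```python
import numpy as np

# Toy EiNet-style layer: for each of `nodes` positions, left- and
# right-child features of size k are combined through mixing weights W,
# so one einsum evaluates every product and sum node of the layer:
# out[b, n, o] = sum_{i, j} left[b, n, i] * right[b, n, j] * W[n, o, i, j]
batch, nodes, k = 32, 10, 4
left = np.random.rand(batch, nodes, k)
right = np.random.rand(batch, nodes, k)
W = np.random.rand(nodes, k, k, k)          # W[n, o, i, j]

out = np.einsum('bni,bnj,noij->bno', left, right, W)
print(out.shape)                            # (32, 10, 4)
```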
This list is automatically generated from the titles and abstracts of the papers on this site.