Prospects for Analog Circuits in Deep Networks
- URL: http://arxiv.org/abs/2106.12444v1
- Date: Wed, 23 Jun 2021 14:49:21 GMT
- Title: Prospects for Analog Circuits in Deep Networks
- Authors: Shih-Chii Liu, John Paul Strachan, Arindam Basu
- Abstract summary: Operations typically used in machine learning algorithms can be implemented by compact analog circuits.
With the recent advances in deep learning algorithms, focus has shifted to hardware digital accelerator designs.
This paper presents a brief review of analog designs that implement various machine learning algorithms.
- Score: 14.280112591737199
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Operations typically used in machine learning algorithms (e.g. adds and softmax) can be implemented by compact analog circuits. Analog Application-Specific Integrated Circuit (ASIC) designs that implement these algorithms using techniques such as charge sharing circuits and subthreshold transistors achieve very high power efficiencies. With the recent advances in deep learning algorithms, focus has shifted to hardware digital accelerator designs that implement the prevalent matrix-vector multiplication operations. Power in these designs is usually dominated by the memory access power of off-chip DRAM needed for storing the network weights and activations. Emerging dense non-volatile memory technologies can help to provide on-chip memory, and analog circuits are well suited to implement the needed matrix-vector multiplication operations coupled with in-memory computing approaches. This paper presents a brief review of analog designs that implement various machine learning algorithms. It then presents an outlook for the use of analog circuits in low-power deep network accelerators suitable for edge or tiny machine learning applications.
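To make the in-memory computing idea above concrete, here is a minimal Python sketch (not from the paper; the array size, conductance range, and noise level are illustrative assumptions) of how a crossbar of non-volatile conductances computes a matrix-vector product in one step: inputs are applied as row voltages, Ohm's law gives per-device currents, and Kirchhoff's current law sums them along each column.

    import numpy as np

    def crossbar_mvm(weights, activations, g_max=1e-4, noise_std=0.01):
        """Emulate an analog crossbar computing y_j = sum_i x_i * W[i, j] in one step.

        weights     : (n_in, n_out) matrix stored as device conductances
        activations : (n_in,) input vector applied as row voltages
        g_max       : assumed maximum device conductance (siemens)
        noise_std   : assumed relative programming noise on each conductance
        """
        # Encode signed weights as a differential pair of conductance arrays
        # (a positive and a negative column per output), a common crossbar scheme.
        w_max = np.max(np.abs(weights)) + 1e-12
        g_pos = np.clip(weights, 0, None) / w_max * g_max
        g_neg = np.clip(-weights, 0, None) / w_max * g_max

        # Device-to-device variation: each programmed conductance is imperfect.
        g_pos *= 1 + noise_std * np.random.randn(*g_pos.shape)
        g_neg *= 1 + noise_std * np.random.randn(*g_neg.shape)

        # Ohm's law per device and Kirchhoff current summation per column:
        # the physics performs the multiply-accumulate "for free".
        i_pos = activations @ g_pos
        i_neg = activations @ g_neg

        # Convert differential column currents back to the weight scale.
        return (i_pos - i_neg) * w_max / g_max

    # Example: a 4-input, 3-output layer applied to one activation vector.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 3))
    x = rng.standard_normal(4)
    print("ideal :", x @ W)
    print("analog:", crossbar_mvm(W, x))

With noise_std set to zero the two printed vectors match; the nonzero default hints at why reliability-aware mapping (as in the IMAC entry below) matters in practice.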
Related papers
- Towards training digitally-tied analog blocks via hybrid gradient computation [1.800676987432211]
We introduce Feedforward-tied Energy-based Models (ff-EBMs).
We derive a novel algorithm to compute gradients end-to-end in ff-EBMs by backpropagating and "eq-propagating" through the feedforward and energy-based parts, respectively.
Our approach offers a principled, scalable, and incremental roadmap to gradually integrate self-trainable analog computational primitives into existing digital accelerators.
arXiv Detail & Related papers (2024-09-05T07:22:19Z)
- CircuitVAE: Efficient and Scalable Latent Circuit Optimization [22.93567682576068]
CircuitVAE is a search algorithm that embeds computation graphs in a continuous space.
Our algorithm is highly sample-efficient, yet gracefully scales to large problem instances and high sample budgets.
We find CircuitVAE can design state-of-the-art adders in a real-world chip, demonstrating that our method can outperform commercial tools in a realistic setting.
arXiv Detail & Related papers (2024-06-13T18:47:52Z)
- Efficient and accurate neural field reconstruction using resistive memory [52.68088466453264]
Traditional signal reconstruction methods on digital computers face both software and hardware challenges.
We propose a systematic approach with software-hardware co-optimizations for signal reconstruction from sparse inputs.
This work advances the AI-driven signal restoration technology and paves the way for future efficient and robust medical AI and 3D vision applications.
arXiv Detail & Related papers (2024-04-15T09:33:09Z)
- CktGNN: Circuit Graph Neural Network for Electronic Design Automation [67.29634073660239]
This paper presents a Circuit Graph Neural Network (CktGNN) that simultaneously automates the circuit topology generation and device sizing.
We introduce Open Circuit Benchmark (OCB), an open-sourced dataset that contains 10K distinct operational amplifiers.
Our work paves the way toward learning-based, open-sourced design automation for analog circuits.
arXiv Detail & Related papers (2023-08-31T02:20:25Z)
- Reliability-Aware Deployment of DNNs on In-Memory Analog Computing Architectures [0.0]
In-Memory Analog Computing (IMAC) circuits remove the need for signal converters by realizing both matrix-vector multiplication (MVM) and non-linear vector (NLV) operations in the analog domain.
We introduce a practical approach to deploy large matrices in deep neural networks (DNNs) onto multiple smaller IMAC subarrays to alleviate the impacts of noise and parasitics.
arXiv Detail & Related papers (2022-10-02T01:43:35Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- AM-DCGAN: Analog Memristive Hardware Accelerator for Deep Convolutional Generative Adversarial Networks [3.4806267677524896]
We present a fully analog hardware design of Deep Convolutional GAN (DCGAN) based on CMOS-memristive convolutional and deconvolutional networks simulated using 180nm CMOS technology.
arXiv Detail & Related papers (2020-06-20T15:37:29Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of Boston housing price prediction and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
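As a rough numerical counterpart to the one-step learning described in the last entry, the following sketch (synthetic data and dimensions; the paper's analog feedback circuit is not modelled) computes the closed-form least-squares solution that the crosspoint array is reported to settle onto in a single physical step.

    import numpy as np

    # Synthetic regression problem; sizes chosen only for illustration.
    rng = np.random.default_rng(1)
    n_samples, n_features = 64, 8
    X = rng.standard_normal((n_samples, n_features))
    true_w = rng.standard_normal(n_features)
    y = X @ true_w + 0.05 * rng.standard_normal(n_samples)  # noisy targets

    # Closed-form least-squares solution w = pinv(X) @ y. In the analog setting
    # the matrix is stored as crosspoint conductances and a feedback circuit
    # settles directly onto this solution instead of iterating gradient updates.
    w_hat = np.linalg.pinv(X) @ y
    print("max weight error:", np.max(np.abs(w_hat - true_w)))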