Theory and Implementation of Process and Temperature Scalable
Shape-based CMOS Analog Circuits
- URL: http://arxiv.org/abs/2205.05664v1
- Date: Wed, 11 May 2022 17:46:01 GMT
- Title: Theory and Implementation of Process and Temperature Scalable
Shape-based CMOS Analog Circuits
- Authors: Pratik Kumar, Ankita Nandi, Shantanu Chakrabartty, Chetan Singh Thakur
- Abstract summary: This work proposes a novel analog computing framework for designing an analog ML processor similar to that of a digital design.
At the core of our work lies shape-based analog computing (S-AC).
The S-AC paradigm also allows the user to trade off computational precision with silicon circuit area and power.
- Score: 6.548257506132353
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Analog computing is attractive compared to its digital counterparts due to its
potential for achieving high compute density and energy efficiency. However,
the device-to-device variability and challenges in porting existing designs to
advanced process nodes have posed a major hindrance in harnessing the full
potential of analog computations for Machine Learning (ML) applications. This
work proposes a novel analog computing framework for designing an analog ML
processor similar to that of a digital design - where the designs can be scaled
and ported to advanced process nodes without architectural changes. At the core
of our work lies shape-based analog computing (S-AC). It utilizes device
primitives to yield a robust proto-function through which other non-linear
shapes can be derived. The S-AC paradigm also allows the user to trade off
computational precision with silicon circuit area and power, thus allowing
users to build a truly power-efficient and scalable analog architecture where
the same synthesized analog circuit can operate across different biasing
regimes of transistors and simultaneously scale across process nodes. As a
proof of concept, we show the implementation of commonly used mathematical
functions for carrying out standard ML tasks in both planar CMOS 180nm and FinFET
7nm process nodes. The synthesized Shape-based ML architecture has been
demonstrated for its classification accuracy on standard data sets at different
process nodes.
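The precision-versus-resources tradeoff that S-AC exposes in hardware can be illustrated with a loose software analogy (a sketch of the idea only, not the S-AC circuit or its proto-function): approximating a non-linear shape with a piecewise-linear function, where spending more segments (the software stand-in for more silicon area and power) buys lower approximation error.

```python
import math

def pwl_approx(f, x, n_segments, lo=-4.0, hi=4.0):
    """Piecewise-linear approximation of f on [lo, hi] with n_segments segments."""
    step = (hi - lo) / n_segments
    # Find the segment containing x and interpolate between its endpoints.
    i = min(int((x - lo) / step), n_segments - 1)
    x0, x1 = lo + i * step, lo + (i + 1) * step
    y0, y1 = f(x0), f(x1)
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def max_error(f, n_segments, samples=1000):
    """Worst-case approximation error over a dense grid of sample points."""
    xs = [-4.0 + 8.0 * k / (samples - 1) for k in range(samples)]
    return max(abs(f(x) - pwl_approx(f, x, n_segments)) for x in xs)

# More segments (the "area" knob) buy lower error (the "precision" knob).
for n in (4, 8, 16, 32):
    print(n, max_error(math.tanh, n))
```

The error falls roughly quadratically in the segment count, which mirrors the qualitative claim that precision can be bought with additional circuit resources.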
Related papers
- AnalogCoder: Analog Circuit Design via Training-Free Code Generation [28.379045024642668]
We introduce AnalogCoder, the first training-free Large Language Models agent for designing analog circuits.
It incorporates a feedback-enhanced flow with tailored domain-specific prompts, enabling the automated and self-correcting design of analog circuits.
It has successfully designed 20 circuits, 5 more than standard GPT-4o.
arXiv Detail & Related papers (2024-05-23T17:13:52Z)
- On the Non-Associativity of Analog Computations [0.0]
In this work, we observe that the ordering of input operands of an analog operation also has an impact on the output result.
We conduct a simple test by creating a model of a real analog processor which captures such ordering effects.
The results prove the existence of ordering effects as well as their high impact, as neglecting ordering results in substantial accuracy drops.
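Ordering effects have a familiar purely digital counterpart: finite-precision floating-point addition is itself non-associative. The toy example below (an analogy, not the authors' model of a real analog processor) shows how summation order alone changes the result.

```python
# Finite-precision addition is order-dependent: adding small values one by one
# to a large accumulator loses them to rounding, while summing them first
# preserves their contribution.
big = 1e16
small = [1.0] * 10

left_to_right = big
for s in small:
    left_to_right += s          # each 1.0 is rounded away against 1e16

small_first = sum(small) + big  # the accumulated 10.0 survives

print(left_to_right == big)     # True: the small terms vanished
print(small_first - big)        # 10.0
```

In float64 the spacing between representable numbers near 1e16 is 2, so each individual `+= 1.0` rounds back to the same value; only the pre-summed 10.0 is large enough to register.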
arXiv Detail & Related papers (2023-09-25T17:04:09Z)
- SPAIC: A sub-$\mu$W/Channel, 16-Channel General-Purpose Event-Based Analog Front-End with Dual-Mode Encoders [6.6017549029623535]
Low-power event-based analog front-ends are crucial to build efficient neuromorphic processing systems.
We present a novel analog front-end chip, denoted SPAIC (signal-to-spike converter for analog AI computation).
It offers a general-purpose dual-mode analog signal-to-spike encoding with delta modulation and pulse frequency modulation, with tunable frequency bands.
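The delta-modulation half of the dual-mode encoding can be sketched behaviorally in software (a sketch of the general principle only, not the SPAIC circuit): an event is emitted whenever the input drifts more than a fixed threshold away from the last encoded level.

```python
import math

def delta_modulate(signal, threshold):
    """Emit (sample_index, +1/-1) events whenever the signal moves more than
    `threshold` away from the last encoded level (delta modulation)."""
    events, level = [], signal[0]
    for i, x in enumerate(signal):
        while x - level >= threshold:   # rising crossing -> UP event
            level += threshold
            events.append((i, +1))
        while level - x >= threshold:   # falling crossing -> DOWN event
            level -= threshold
            events.append((i, -1))
    return events

# Encode one period of a sine wave into sparse up/down events.
sig = [math.sin(2 * math.pi * t / 50) for t in range(100)]
ev = delta_modulate(sig, 0.2)
```

The encoded level always tracks the input to within one threshold, so the signal can be reconstructed from the event stream up to that quantization error.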
arXiv Detail & Related papers (2023-08-31T19:53:04Z)
- Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform in-context learning (ICL).
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z)
- RWKV: Reinventing RNNs for the Transformer Era [54.716108899349614]
We propose a novel model architecture that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.
We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers.
arXiv Detail & Related papers (2023-05-22T13:57:41Z)
- Simulation Paths for Quantum Circuit Simulation with Decision Diagrams [72.03286471602073]
We study the importance of the path that is chosen when simulating quantum circuits using decision diagrams.
We propose an open-source framework that allows one to investigate dedicated simulation paths.
arXiv Detail & Related papers (2022-03-01T19:00:11Z)
- Bias-Scalable Near-Memory CMOS Analog Processor for Machine Learning [6.548257506132353]
Bias-scalable approximate analog computing is attractive for implementing machine learning (ML) processors with distinct power-performance specifications.
We demonstrate the implementation of bias-scalable approximate analog computing circuits using the generalization of the margin-propagation principle.
arXiv Detail & Related papers (2022-02-10T13:26:00Z)
- Prospects for Analog Circuits in Deep Networks [14.280112591737199]
Operations typically used in machine learning algorithms can be implemented by compact analog circuits.
With the recent advances in deep learning algorithms, focus has shifted to hardware digital accelerator designs.
This paper presents a brief review of analog designs that implement various machine learning algorithms.
arXiv Detail & Related papers (2021-06-23T14:49:21Z)
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of the prediction of the cost of a house in Boston and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
- Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits [99.59941892183454]
We propose Einsum Networks (EiNets), a novel implementation design for PCs.
At their core, EiNets combine a large number of arithmetic operations in a single monolithic einsum-operation.
We show that the implementation of Expectation-Maximization (EM) can be simplified for PCs, by leveraging automatic differentiation.
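The monolithic-einsum idea can be illustrated with a small NumPy sketch (hypothetical shapes chosen for the example, not the EiNet implementation): evaluating many weighted-sum (mixture) nodes of a probabilistic circuit layer, for a batch of samples, collapses into a single `einsum` call instead of a Python loop over nodes.

```python
import numpy as np

# A batch of K independent mixture (sum) nodes, each weighting C child
# densities, evaluated for N samples -- all in one einsum call.
rng = np.random.default_rng(0)
N, K, C = 4, 3, 5
child = rng.random((N, K, C))          # child densities per sample and node
w = rng.random((K, C))
w /= w.sum(axis=1, keepdims=True)      # normalized per-node mixture weights

out = np.einsum('nkc,kc->nk', child, w)

# Equivalent node-by-node loop, for comparison:
ref = np.stack([child[:, k, :] @ w[k] for k in range(K)], axis=1)
assert np.allclose(out, ref)
```

Fusing the loop into one vectorized operation is what lets such layers run efficiently on GPUs and be differentiated automatically.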
arXiv Detail & Related papers (2020-04-13T23:09:15Z)
- Efficient classical simulation of random shallow 2D quantum circuits [104.50546079040298]
Random quantum circuits are commonly viewed as hard to simulate classically.
We show that approximate simulation of typical instances is almost as hard as exact simulation.
We also conjecture that sufficiently shallow random circuits are efficiently simulable more generally.
arXiv Detail & Related papers (2019-12-31T19:00:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.