NN-PARS: A Parallelized Neural Network Based Circuit Simulation
Framework
- URL: http://arxiv.org/abs/2002.05292v1
- Date: Thu, 13 Feb 2020 00:34:31 GMT
- Title: NN-PARS: A Parallelized Neural Network Based Circuit Simulation
Framework
- Authors: Mohammad Saeed Abrishami, Hao Ge, Justin F. Calderon, Massoud Pedram,
Shahin Nazarian
- Abstract summary: Existing circuit simulators are either slow or inaccurate in analyzing the nonlinear behavior of designs with billions of transistors.
We present NN-PARS, a neural network (NN) based and parallelized circuit simulation framework with optimized event-driven scheduling of simulation tasks.
Experimental results show that compared to a state-of-the-art current-based simulation method, NN-PARS reduces the simulation time by over two orders of magnitude in large circuits.
- Score: 6.644753932694431
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The shrinking of transistor geometries, together with the increasing
complexity of integrated circuits, significantly aggravates nonlinear design
behavior. This demands accurate and fast circuit simulation to meet design
quality and time-to-market constraints. Existing circuit simulators, which rely
on lookup tables and/or closed-form expressions, are either slow or inaccurate
when analyzing the nonlinear behavior of designs with billions of transistors. To
address these shortcomings, we present NN-PARS, a neural network (NN) based and
parallelized circuit simulation framework with optimized event-driven
scheduling of simulation tasks to maximize concurrency, according to the
underlying GPU parallel processing capabilities. NN-PARS replaces the required
memory queries in traditional techniques with parallelized NN-based computation
tasks. Experimental results show that compared to a state-of-the-art
current-based simulation method, NN-PARS reduces the simulation time by over
two orders of magnitude in large circuits. NN-PARS also provides high accuracy
levels in signal waveform calculations, with less than $2\%$ error compared to
HSPICE.
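The scheduling idea described above (replacing per-gate memory queries with batched NN evaluation, grouped so that independent gates run concurrently on the GPU) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function names, the feature vectors, and the linear stand-in for the gate-delay NN are all hypothetical.

```python
import numpy as np
from collections import defaultdict, deque

# Illustrative sketch only (not NN-PARS's actual code): gates are grouped by
# topological level so that every gate in a level can be evaluated in one
# batched "NN inference" call, here stood in for by a single matrix product.

def levelize(gates):
    """gates: dict gate -> list of fan-in gates; returns {level: [gates]}."""
    indeg = {g: len(fi) for g, fi in gates.items()}
    fanout = defaultdict(list)
    for g, fi in gates.items():
        for p in fi:
            fanout[p].append(g)
    level = {g: 0 for g, d in indeg.items() if d == 0}
    q = deque(level)
    while q:
        g = q.popleft()
        for s in fanout[g]:
            indeg[s] -= 1
            level[s] = max(level.get(s, 0), level[g] + 1)
            if indeg[s] == 0:
                q.append(s)
    by_level = defaultdict(list)
    for g, l in level.items():
        by_level[l].append(g)
    return dict(by_level)

def simulate(gates, features, W, b):
    """One batched evaluation per level instead of per-gate table lookups."""
    delays, by_level = {}, levelize(gates)
    for l in sorted(by_level):
        batch = by_level[l]
        X = np.stack([features[g] for g in batch])  # (n_gates, n_features)
        y = X @ W + b                               # stand-in for NN inference
        delays.update(zip(batch, map(float, y)))
    return delays
```

Gates within a level have no data dependencies among themselves, so a real implementation would dispatch each level's batch as a single GPU kernel.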
Related papers
- Enhancing Open Quantum Dynamics Simulations Using Neural Network-Based Non-Markovian Stochastic Schrödinger Equation Method [2.9413085575648235]
We propose a scheme that combines neural network techniques with simulations of the non-Markovian stochastic Schrödinger equation.
This approach significantly reduces the number of trajectories required for long-time simulations, particularly at low temperatures.
arXiv Detail & Related papers (2024-11-24T16:57:07Z)
- A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z)
- A Fast Algorithm to Simulate Nonlinear Resistive Networks [0.6526824510982799]
We introduce a novel approach for the simulation of nonlinear resistive networks, which we frame as a quadratic programming problem with linear inequality constraints.
Our simulation methodology significantly outperforms existing SPICE-based simulations, enabling the training of networks up to 327 times larger at speeds 160 times faster.
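The quadratic-programming framing can be illustrated with a toy instance. This sketch is not the paper's formulation; the matrix, bounds, and solver are assumed for illustration, and a simple projected-gradient loop handles only the box-constrained special case of linear inequality constraints.

```python
import numpy as np

# Toy sketch (illustrative values, not the authors' formulation):
#   minimize  1/2 v^T P v + q^T v   subject to  v <= ub,
# where P plays the role of a conductance matrix and the upper bounds play
# the role of linear inequality (e.g. clipping) constraints.

def solve_box_qp(P, q, ub, iters=2000, lr=0.1):
    v = np.zeros(len(q))
    for _ in range(iters):
        v -= lr * (P @ v + q)      # gradient step on the quadratic energy
        v = np.minimum(v, ub)      # project back onto the feasible box
    return v

P = np.array([[2.0, -1.0], [-1.0, 2.0]])   # symmetric positive definite
q = np.array([-1.0, 0.0])
ub = np.array([0.5, np.inf])               # constrain only v[0]
v = solve_box_qp(P, q, ub)                 # -> approximately [0.5, 0.25]
```

The unconstrained minimizer here is (2/3, 1/3), which violates the bound, so the solution sits on the constraint boundary with v[0] = 0.5 and v[1] = 0.25.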
arXiv Detail & Related papers (2024-02-18T18:33:48Z)
- RWKV: Reinventing RNNs for the Transformer Era [54.716108899349614]
We propose a novel model architecture that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.
We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers.
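The parallel-training/recurrent-inference duality the summary refers to can be shown on a much simpler linear recurrence. This is illustrative only and is not RWKV's actual time-mixing formula.

```python
import numpy as np

# Illustrative only (not RWKV's recurrence): h_t = a*h_{t-1} + x_t computed
# two ways, showing why such linear recurrences admit both cheap O(T)
# sequential inference and parallel (scan-style) training.

def recurrent(x, a):
    h, out = 0.0, []
    for xt in x:                 # sequential loop: cheap at inference time
        h = a * h + xt
        out.append(h)
    return np.array(out)

def parallel(x, a):
    T = len(x)
    # Closed form h_t = sum_{k<=t} a^(t-k) x_k, vectorized; a real system
    # would use a parallel prefix scan (dividing by a^k underflows for long T).
    powers = a ** np.arange(T)
    return powers * np.cumsum(x / powers)

x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(recurrent(x, 0.9), parallel(x, 0.9))  # same h_t either way
```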
arXiv Detail & Related papers (2023-05-22T13:57:41Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Scalable Nanophotonic-Electronic Spiking Neural Networks [3.9918594409417576]
Spiking neural networks (SNN) provide a new computational paradigm capable of highly parallelized, real-time processing.
Photonic devices are ideal for the design of high-bandwidth, parallel architectures matching the SNN computational paradigm.
Co-integrated CMOS and SiPh technologies are well-suited to the design of scalable SNN computing architectures.
arXiv Detail & Related papers (2022-08-28T06:10:06Z)
- Parallel Simulation of Quantum Networks with Distributed Quantum State Management [56.24769206561207]
We identify requirements for parallel simulation of quantum networks and develop the first parallel discrete event quantum network simulator.
Our contributions include the design and development of a quantum state manager that maintains shared quantum information distributed across multiple processes.
We release the parallel SeQUeNCe simulator as an open-source tool alongside the existing sequential version.
arXiv Detail & Related papers (2021-11-06T16:51:17Z)
- Implementing efficient balanced networks with mixed-signal spike-based learning circuits [2.1640200483378953]
Efficient Balanced Networks (EBNs) are networks of spiking neurons in which excitatory and inhibitory synaptic currents are balanced on a short timescale.
We develop a novel local learning rule suitable for on-chip implementation that drives a randomly connected network of spiking neurons into a tightly balanced regime.
Thanks to their coding properties and sparse activity, neuromorphic electronic EBNs will be ideally suited for extreme-edge computing applications.
arXiv Detail & Related papers (2020-10-27T15:05:51Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
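The two-phase gradient estimate behind equilibrium propagation can be sketched on a one-weight energy model. This toy is assumed for illustration and is not the paper's analog-network formulation; the energy, cost, and learning rates are hypothetical.

```python
# Minimal equilibrium-propagation sketch on a one-weight energy model,
#   E(w, s) = 0.5*s^2 - w*x*s,   cost C(s) = 0.5*(s - y)^2.
# Illustrative toy, not the paper's formulation.

def settle(w, x, y, beta, steps=200, lr=0.1):
    """Relax the state s by gradient descent on F = E + beta*C."""
    s = 0.0
    for _ in range(steps):
        dF = (s - w * x) + beta * (s - y)   # dE/ds + beta*dC/ds
        s -= lr * dF
    return s

def eqprop_grad(w, x, y, beta=0.01):
    s_free = settle(w, x, y, beta=0.0)      # free phase
    s_nudge = settle(w, x, y, beta=beta)    # weakly nudged phase
    # dE/dw = -x*s; contrast the two phases and divide by beta.
    return (-x * s_nudge + x * s_free) / beta

# Train w so the free equilibrium s* = w*x matches the target y.
x, y, w = 1.0, 0.5, 0.0
for _ in range(100):
    w -= 0.5 * eqprop_grad(w=w, x=x, y=y)
```

For this quadratic energy the contrast term equals x*(w*x - y)/(1 + beta), which approaches the true loss gradient as beta goes to zero, so w converges to y/x.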
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
- CSM-NN: Current Source Model Based Logic Circuit Simulation -- A Neural Network Approach [5.365198933008246]
CSM-NN is a scalable simulation framework with optimized neural network structures and processing algorithms.
Experiments show that CSM-NN reduces the simulation time by up to $6\times$ compared to a state-of-the-art current source model based simulator running on a CPU.
CSM-NN also provides high accuracy levels, with less than $2\%$ error, compared to HSPICE.
arXiv Detail & Related papers (2020-02-13T00:29:44Z)
- Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
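The fixed-point view of a feedforward pass can be sketched as follows: all layers are updated simultaneously from the previous iterate (Jacobi style), and after at most depth-many iterations the result matches sequential evaluation exactly. The network below is a hypothetical tanh MLP used only for illustration.

```python
import numpy as np

# Sketch: a depth-L feedforward pass s_l = f_l(s_{l-1}) viewed as the fixed
# point of a joint update over all layers (Jacobi iteration). Layer updates
# within one iteration are mutually independent, hence parallelizable, and
# the iterate equals sequential evaluation after at most L iterations.

def sequential(x, weights):
    s = x
    for W in weights:
        s = np.tanh(W @ s)
    return s

def jacobi(x, weights, iters):
    s = [np.zeros(W.shape[0]) for W in weights]   # initial guess, all zeros
    for _ in range(iters):
        prev = [x] + s[:-1]
        # every layer reads only last iteration's values: parallelizable
        s = [np.tanh(W @ p) for W, p in zip(weights, prev)]
    return s[-1]

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 4)) for _ in range(3)]
x = rng.standard_normal(4)
assert np.allclose(jacobi(x, weights, iters=3), sequential(x, weights))
```

After iteration k, the first k layers already hold their exact values, which is why three iterations suffice for the three-layer example above; the paper's point is that each iteration is itself parallel across layers.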
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.