AM-DCGAN: Analog Memristive Hardware Accelerator for Deep Convolutional
Generative Adversarial Networks
- URL: http://arxiv.org/abs/2007.12063v1
- Date: Sat, 20 Jun 2020 15:37:29 GMT
- Title: AM-DCGAN: Analog Memristive Hardware Accelerator for Deep Convolutional
Generative Adversarial Networks
- Authors: Olga Krestinskaya, Bhaskar Choubey, Alex Pappachen James
- Abstract summary: We present a fully analog hardware design of Deep Convolutional GAN (DCGAN) based on CMOS-memristive convolutional and deconvolutional networks simulated using 180nm CMOS technology.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) are well known to be
computationally complex, requiring significant computational resources in
software implementations as well as large amounts of training data. This makes
their implementation on edge devices with conventional microprocessor hardware
slow and difficult. In this paper, we propose to accelerate the computationally
intensive GAN using memristive neural networks in the analog domain. We present
a fully analog hardware design of a Deep Convolutional GAN (DCGAN) based on
CMOS-memristive convolutional and deconvolutional networks, simulated using
180nm CMOS technology.
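The core operation that memristive designs like this accelerate is the analog vector-matrix multiply: each memristor's conductance stores a weight, input voltages drive the crossbar rows, and Kirchhoff's current law sums the column currents in a single step. Below is a minimal NumPy sketch of an idealized crossbar under that model; the array size and conductance range are illustrative assumptions, not values from the paper, and real devices add non-idealities (wire resistance, device variation) that this ignores.

```python
import numpy as np

def crossbar_vmm(voltages, conductances):
    """Ideal crossbar multiply: column current I_j = sum_i V_i * G[i, j].

    Ohm's law gives each memristor's current V_i * G[i, j]; Kirchhoff's
    current law sums them along each column, so the whole vector-matrix
    product happens in one analog step.
    """
    return voltages @ conductances

rng = np.random.default_rng(0)
# Assumed 4x3 crossbar with conductances in a plausible memristor range (siemens).
G = rng.uniform(1e-6, 1e-4, size=(4, 3))
V = np.array([0.1, 0.2, -0.1, 0.05])  # input voltages on the rows (volts)

I = crossbar_vmm(V, G)  # one output current per column
```

Mapping signed neural-network weights onto strictly positive conductances typically requires a differential pair of devices per weight, a detail omitted from this sketch.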
Related papers
- Logic Design of Neural Networks for High-Throughput and Low-Power
Applications
We propose to flatten and implement all the operations at neurons, e.g., MAC and ReLU, in a neural network with their corresponding logic circuits.
The weight values are embedded into the MAC units to simplify the logic, which can reduce the delay of the MAC units and the power consumption incurred by weight movement.
In addition, we propose a hardware-aware training method to reduce the area of logic designs of neural networks.
arXiv Detail & Related papers (2023-09-19T10:45:46Z) - Solving Large-scale Spatial Problems with Convolutional Neural Networks
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation.
arXiv Detail & Related papers (2023-06-14T01:24:42Z) - Intelligence Processing Units Accelerate Neuromorphic Learning
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge
Deep neural network (DNN) latency characterization is a time-consuming process.
We propose MAPLE-X which extends MAPLE by incorporating explicit prior knowledge of hardware devices and DNN architecture latency.
arXiv Detail & Related papers (2022-05-25T11:08:20Z) - FPGA-optimized Hardware acceleration for Spiking Neural Networks
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z) - Prospects for Analog Circuits in Deep Networks
Operations typically used in machine learning algorithms can be implemented by compact analog circuits.
With the recent advances in deep learning algorithms, focus has shifted to digital hardware accelerator designs.
This paper presents a brief review of analog designs that implement various machine learning algorithms.
arXiv Detail & Related papers (2021-06-23T14:49:21Z) - SEMULATOR: Emulating the Dynamics of Crossbar Array-based Analog Neural
System with Regression Neural Networks
We propose SEMULATOR, a methodology that uses a deep neural network to emulate the behavior of a crossbar-based analog computing system.
With the proposed neural architecture, we experimentally and theoretically show that it emulates a MAC unit for neural computation.
arXiv Detail & Related papers (2021-01-19T21:08:33Z) - Overview of FPGA deep learning acceleration based on convolutional
neural network
In recent years, deep learning has become more and more mature, and as a commonly used algorithm in deep learning, convolutional neural networks have been widely used in various visual tasks.
This review article mainly introduces the related theories and algorithms of convolution.
It summarizes the application scenarios of several existing FPGA technologies based on convolutional neural networks, with a focus on accelerator applications.
arXiv Detail & Related papers (2020-12-23T12:44:24Z) - Fully-parallel Convolutional Neural Network Hardware
We propose a new power- and area-efficient architecture for implementing Artificial Neural Networks (ANNs) in hardware.
For the first time, a fully-parallel CNN such as LeNet-5 is embedded and tested in a single FPGA.
arXiv Detail & Related papers (2020-06-22T17:19:09Z) - One-step regression and classification with crosspoint resistive memory
arrays
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of the prediction of the cost of a house in Boston and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z) - Spiking Neural Networks Hardware Implementations and Challenges: a
Survey
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences.