NEON: Enabling Efficient Support for Nonlinear Operations in Resistive
RAM-based Neural Network Accelerators
- URL: http://arxiv.org/abs/2211.05730v1
- Date: Thu, 10 Nov 2022 17:57:35 GMT
- Title: NEON: Enabling Efficient Support for Nonlinear Operations in Resistive
RAM-based Neural Network Accelerators
- Authors: Aditya Manglik, Minesh Patel, Haiyu Mao, Behzad Salami, Jisung Park,
Lois Orosa, Onur Mutlu
- Abstract summary: Resistive Random-Access Memory (RRAM) is well-suited to accelerate neural network (NN) workloads.
NEON is a novel compiler optimization to enable the end-to-end execution of the NN workload in RRAM.
- Score: 12.045126404373868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Resistive Random-Access Memory (RRAM) is well-suited to accelerate neural
network (NN) workloads as RRAM-based Processing-in-Memory (PIM) architectures
natively support highly-parallel multiply-accumulate (MAC) operations that form
the backbone of most NN workloads. Unfortunately, NN workloads such as
transformers require support for non-MAC operations (e.g., softmax) that RRAM
cannot provide natively. Consequently, state-of-the-art works either integrate
additional digital logic circuits to support the non-MAC operations or offload
the non-MAC operations to CPU/GPU, resulting in significant performance and
energy efficiency overheads due to data movement.
In this work, we propose NEON, a novel compiler optimization to enable the
end-to-end execution of the NN workload in RRAM. The key idea of NEON is to
transform each non-MAC operation into a lightweight yet highly-accurate neural
network. Utilizing neural networks to approximate the non-MAC operations
provides two advantages: 1) We can exploit the key strength of RRAM, i.e.,
highly-parallel MAC operation, to flexibly and efficiently execute non-MAC
operations in memory. 2) We can simplify RRAM's microarchitecture by
eliminating the additional digital logic circuits while reducing the data
movement overheads. Acceleration of the non-MAC operations in memory enables
NEON to achieve a 2.28x speedup compared to an idealized digital logic-based
RRAM. We analyze the trade-offs associated with the transformation and
demonstrate feasible use cases for NEON across different substrates.
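To make the key idea concrete, below is a minimal PyTorch sketch of fitting a lightweight MLP to an elementwise nonlinearity; GELU stands in for the non-MAC operations NEON targets (e.g., softmax), and the layer sizes and training setup are illustrative assumptions rather than the paper's configuration.
```python
# Minimal sketch: replace a non-MAC op with a small MLP so it executes as MACs.
# Illustrative assumptions throughout -- not NEON's actual transformation.
import torch
import torch.nn as nn

target = nn.GELU()                       # stand-in non-MAC op
approx = nn.Sequential(                  # lightweight approximator: two MAC layers
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1)
)

opt = torch.optim.Adam(approx.parameters(), lr=1e-2)
for _ in range(2000):
    x = torch.empty(256, 1).uniform_(-4, 4)         # sample the expected input range
    loss = nn.functional.mse_loss(approx(x), target(x))
    opt.zero_grad(); loss.backward(); opt.step()

grid = torch.linspace(-4, 4, 9).unsqueeze(1)
print((approx(grid) - target(grid)).abs().max())    # worst-case error on the grid
```
Once trained, the approximator consists only of linear layers (plus a cheap ReLU), so its weights can be programmed onto crossbars like any other layer, and the former non-MAC operation inherits RRAM's highly-parallel MAC execution.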
Related papers
- BasisN: Reprogramming-Free RRAM-Based In-Memory-Computing by Basis Combination for Deep Neural Networks [9.170451418330696]
We propose the BasisN framework to accelerate deep neural networks (DNNs) on any number of crossbars without reprogramming.
We show that the cycles per inference and the energy-delay product drop to below 1% of those incurred when crossbars are reprogrammed.
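A minimal numpy sketch of the basis-combination idea as we read the abstract: crossbars hold fixed basis matrices that are programmed once, and each layer is realized by digitally combining their MAC outputs. The basis count and the least-squares fit below are our illustrative assumptions, not BasisN's actual procedure.
```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 64, 16                            # crossbar size, number of bases
bases = rng.standard_normal((k, n, n))   # programmed once, never rewritten
W = rng.standard_normal((n, n))          # weight matrix of some DNN layer

# Fit coefficients c so that sum_i c[i] * bases[i] approximates W.
A = bases.reshape(k, -1).T               # shape (n*n, k)
c, *_ = np.linalg.lstsq(A, W.ravel(), rcond=None)
W_hat = (A @ c).reshape(n, n)

x = rng.standard_normal(n)
y = np.tensordot(c, bases @ x, axes=1)   # combine k crossbar MACs digitally
print(np.linalg.norm(y - W_hat @ x),     # ~0: combining outputs == combined matrix
      np.linalg.norm(W - W_hat) / np.linalg.norm(W))   # quality of the basis fit
```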
arXiv Detail & Related papers (2024-07-04T08:47:05Z)
- Efficient and accurate neural field reconstruction using resistive memory [52.68088466453264]
Traditional signal reconstruction methods on digital computers face both software and hardware challenges.
We propose a systematic approach with software-hardware co-optimizations for signal reconstruction from sparse inputs.
This work advances AI-driven signal restoration technology and paves the way for future efficient and robust medical AI and 3D vision applications.
arXiv Detail & Related papers (2024-04-15T09:33:09Z)
- Logic Design of Neural Networks for High-Throughput and Low-Power Applications [4.964773661192363]
We propose to flatten all the operations at neurons, e.g., MAC and ReLU, in a neural network and implement them directly with their corresponding logic circuits.
The weight values are embedded into the MAC units to simplify the logic, which reduces the delay of the MAC units and the power consumption incurred by weight movement.
In addition, we propose a hardware-aware training method to reduce the area of logic designs of neural networks.
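A small Python sketch of why embedding fixed weights simplifies MAC logic: a constant multiply collapses into a few shifts and adds, and ReLU is just a comparator. The plain binary decomposition is illustrative; hardware flows typically use stronger encodings such as canonical signed digit.
```python
def const_mult_terms(w: int):
    """Decompose a constant weight: w*x == sign * sum(x << s for s in shifts)."""
    sign, mag = (1, w) if w >= 0 else (-1, -w)
    return sign, [s for s in range(mag.bit_length()) if (mag >> s) & 1]

def neuron(xs, ws):
    """Fully unrolled neuron: constant-weight MACs, then ReLU."""
    acc = 0
    for x, w in zip(xs, ws):
        sign, shifts = const_mult_terms(w)
        acc += sign * sum(x << s for s in shifts)
    return max(acc, 0)                    # ReLU reduces to a comparator/mux

print(const_mult_terms(6))                # (1, [1, 2]): x*6 == (x<<1) + (x<<2)
print(neuron([3, 1, 2], [6, -4, 5]))      # 18 - 4 + 10 = 24
```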
arXiv Detail & Related papers (2023-09-19T10:45:46Z)
- INR-Arch: A Dataflow Architecture and Compiler for Arbitrary-Order Gradient Computations in Implicit Neural Representation Processing [66.00729477511219]
Given a function represented as a computation graph, traditional architectures face challenges in efficiently computing its nth-order gradient.
We introduce INR-Arch, a framework that transforms the computation graph of an nth-order gradient into a hardware-optimized dataflow architecture.
We present results that demonstrate 1.8-4.8x and 1.5-3.6x speedup compared to CPU and GPU baselines respectively.
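For reference, the computation being accelerated can be written with repeated reverse-mode autodiff; this PyTorch snippet is our CPU/GPU stand-in for the nth-order gradient graphs that INR-Arch compiles into a dataflow design.
```python
import torch

def nth_grad(f, x: torch.Tensor, n: int) -> torch.Tensor:
    """n-fold application of reverse-mode autodiff to a scalar function."""
    g = f(x)
    for _ in range(n):
        (g,) = torch.autograd.grad(g, x, create_graph=True)
    return g

x = torch.tensor(0.5, requires_grad=True)
f = lambda t: torch.sin(3.0 * t)          # stand-in for an implicit neural field
print(nth_grad(f, x, 2))                  # f''(x) = -9*sin(1.5) ~ -8.977
```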
arXiv Detail & Related papers (2023-08-11T04:24:39Z)
- DAISM: Digital Approximate In-SRAM Multiplier-based Accelerator for DNN Training and Inference [4.718504401468233]
PIM solutions rely either on novel memory technologies that have yet to mature or on bit-serial computations that incur significant performance overhead and scalability issues.
Our work proposes an in-SRAM digital multiplier that uses conventional memory to perform bit-parallel computations by activating multiple wordlines.
We then introduce DAISM, an architecture leveraging this multiplier, which achieves up to two orders of magnitude higher area efficiency than state-of-the-art counterparts, with competitive energy efficiency.
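A pure-software sketch of the bit-parallel idea, assuming one stored-operand bit per wordline; in the actual design the AND and shift-accumulate happen in the SRAM array and its periphery, not in Python.
```python
def in_sram_multiply(stored: int, streamed: int, bits: int = 8) -> int:
    """Model of a bit-parallel multiply via simultaneous wordline activation."""
    wordlines = [(stored >> i) & 1 for i in range(bits)]  # one bit per row
    partials = [wl * streamed for wl in wordlines]        # parallel AND rows
    return sum(p << i for i, p in enumerate(partials))    # shift-accumulate

print(in_sram_multiply(13, 11))   # 143
```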
arXiv Detail & Related papers (2023-05-12T10:58:21Z)
- A 65nm 8b-Activation 8b-Weight SRAM-Based Charge-Domain Computing-in-Memory Macro Using A Fully-Parallel Analog Adder Network and A Single-ADC Interface [16.228299091691873]
Computing-in-memory (CiM) is a promising approach that mitigates data-movement costs by enabling multiply-accumulate operations within the memory.
This work achieves 51.2 GOPS throughput and 10.3 TOPS/W energy efficiency, while showing 88.6% accuracy on the CIFAR-10 dataset.
arXiv Detail & Related papers (2022-11-23T07:52:10Z)
- Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design [66.39546326221176]
Attention-based neural networks have become pervasive in many AI tasks.
The use of the attention mechanism and feed-forward network (FFN) demands excessive computational and memory resources.
This paper proposes a hardware-friendly variant that adopts a unified butterfly sparsity pattern to approximate both the attention mechanism and the FFNs.
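A numpy sketch of the butterfly sparsity pattern being exploited: a dense N x N matrix is replaced by log2(N) sparse factors with only two nonzeros per row, cutting compute and storage to O(N log N). The random factor values are placeholders, not a trained model.
```python
import numpy as np

def butterfly_factor(n: int, stride: int, rng) -> np.ndarray:
    """Sparse factor pairing each index i with its partner i XOR stride."""
    f = np.zeros((n, n))
    for i in range(n):
        f[i, i], f[i, i ^ stride] = rng.standard_normal(2)
    return f

rng = np.random.default_rng(0)
n = 8
factors = [butterfly_factor(n, 1 << s, rng) for s in range(n.bit_length() - 1)]

y = rng.standard_normal(n)
for f in factors:                  # log2(n) sparse multiplies vs one dense one
    y = f @ y
print(sum(np.count_nonzero(f) for f in factors), n * n)   # 48 nonzeros vs 64
```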
arXiv Detail & Related papers (2022-09-20T09:28:26Z)
- MAC-DO: An Efficient Output-Stationary GEMM Accelerator for CNNs Using DRAM Technology [2.918940961856197]
This paper presents MAC-DO, an efficient and low-power DRAM-based in-situ accelerator.
It supports a multi-bit multiply-accumulate (MAC) operation within a single cycle.
A MAC-DO array can efficiently accelerate matrix multiplications based on output-stationary mapping, supporting the majority of the computations performed in deep neural networks (DNNs).
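A plain-Python sketch of output-stationary mapping: each accumulator stays in place (in MAC-DO, as charge inside a DRAM cell) while inputs stream past, so partial sums never move.
```python
def gemm_output_stationary(a, b):
    """GEMM where c[i][j] accumulates in place across streamed rank-1 updates."""
    m, k, n = len(a), len(b), len(b[0])
    c = [[0] * n for _ in range(m)]
    for kk in range(k):                      # one rank-1 update per step
        for i in range(m):
            for j in range(n):
                c[i][j] += a[i][kk] * b[kk][j]   # accumulate without moving c
    return c

print(gemm_output_stationary([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```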
arXiv Detail & Related papers (2022-07-16T07:33:20Z)
- GPU-Accelerated Machine Learning in Non-Orthogonal Multiple Access [71.58925117604039]
Non-orthogonal multiple access (NOMA) is a promising technology that enables the massive connectivity required in future 5G and 6G networks.
We propose a neural network architecture that combines the advantages of both linear and non-linear processing.
arXiv Detail & Related papers (2022-06-13T09:38:23Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
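A numpy sketch of the decomposition as we understand it: an M-bit quantized weight matrix is rewritten as a sum of M binary {-1, +1} matrices scaled by powers of two, so one layer executes as M binary branches recombined with shifts and adds. The odd-level mapping is our illustrative assumption.
```python
import numpy as np

M = 3                                          # weight bit-width
rng = np.random.default_rng(0)
levels = np.arange(-(2**M - 1), 2**M, 2)       # odd levels -7, -5, ..., 7
W = rng.choice(levels, size=(4, 4))            # a quantized weight matrix

Q = (W + (2**M - 1)) // 2                      # unsigned M-bit code of each weight
branches = [2 * ((Q >> j) & 1) - 1 for j in range(M)]   # binary {-1, +1} matrices

x = rng.standard_normal(4)
y = sum((2**j) * (b @ x) for j, b in enumerate(branches))   # M binary branches
print(np.allclose(W @ x, y))                   # True: shifts/adds recombine exactly
```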
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training [82.35376405568975]
Deep neural networks (DNNs) are heavily parameterized, forcing reliance on external dynamic random-access memory (DRAM) for storage.
We present SmartDeal (SD), an algorithm framework to trade higher-cost memory storage/access for lower-cost computation.
We show that SD leads to 10.56x and 4.48x reductions in storage and training energy, respectively, with negligible accuracy loss compared to state-of-the-art training baselines.
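A rough numpy sketch of the storage-for-compute trade, assuming an SVD-based factorization with a sparsified, power-of-two coefficient matrix; this is our illustration of the principle, not SmartDeal's actual decomposition algorithm.
```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 64, 64, 8
W = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(W, full_matrices=False)       # best rank-k factorization
C, B = U[:, :k] * s[:k], Vt[:k]                        # W ~= C @ B

C[np.abs(C) < np.quantile(np.abs(C), 0.5)] = 0         # sparsify: drop half of C
nz = C != 0
C[nz] = np.sign(C[nz]) * 2.0 ** np.round(np.log2(np.abs(C[nz])))   # power-of-two

stored = np.count_nonzero(C) + B.size                  # values kept vs m*n dense
print(stored, m * n, np.linalg.norm(W - C @ B) / np.linalg.norm(W))
```
Rebuilding W on the fly costs extra MACs, which is the trade of higher-cost memory storage/access for lower-cost computation that the summary describes.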
arXiv Detail & Related papers (2021-01-04T18:54:07Z)