CODEBench: A Neural Architecture and Hardware Accelerator Co-Design Framework
- URL: http://arxiv.org/abs/2212.03965v1
- Date: Wed, 7 Dec 2022 21:38:03 GMT
- Title: CODEBench: A Neural Architecture and Hardware Accelerator Co-Design Framework
- Authors: Shikhar Tuli, Chia-Hao Li, Ritvik Sharma, Niraj K. Jha
- Abstract summary: This work proposes a novel neural architecture and hardware accelerator co-design framework, called CODEBench.
It is composed of two new benchmarking sub-frameworks, CNNBench and AccelBench, which explore expanded design spaces of convolutional neural networks (CNNs) and CNN accelerators.
- Score: 4.5259990830344075
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, automated co-design of machine learning (ML) models and accelerator
architectures has attracted significant attention from both industry and
academia. However, most co-design frameworks either explore a limited search
space or employ suboptimal exploration techniques for simultaneous design
decision investigations of the ML model and the accelerator. Furthermore,
training the ML model and simulating the accelerator performance is
computationally expensive. To address these limitations, this work proposes a
novel neural architecture and hardware accelerator co-design framework, called
CODEBench. It is composed of two new benchmarking sub-frameworks, CNNBench and
AccelBench, which explore expanded design spaces of convolutional neural
networks (CNNs) and CNN accelerators. CNNBench leverages an advanced search
technique, BOSHNAS, to efficiently train a neural heteroscedastic surrogate
model to converge to an optimal CNN architecture by employing second-order
gradients. AccelBench performs cycle-accurate simulations for a diverse set of
accelerator architectures in a vast design space. With the proposed co-design
method, called BOSHCODE, our best CNN-accelerator pair achieves 1.4% higher
accuracy on the CIFAR-10 dataset compared to the state-of-the-art pair, while
enabling 59.1% lower latency and 60.8% lower energy consumption. On the
ImageNet dataset, it achieves 3.7% higher Top-1 accuracy at 43.8% lower latency
and 11.2% lower energy consumption. CODEBench outperforms the state-of-the-art
framework, i.e., Auto-NBA, by achieving 1.5% higher accuracy and 34.7x higher
throughput, while enabling 11.0x lower energy-delay product (EDP) and 4.0x
lower chip area on CIFAR-10.
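As a rough illustration of the co-design idea, the sketch below runs a toy surrogate-guided search over joint (CNN, accelerator) encodings in Python. It is not the CODEBench/BOSHCODE implementation: the design-space encoding, the k-nearest-neighbour surrogate (standing in for BOSHNAS's heteroscedastic neural surrogate trained with second-order gradients), and the scalarized accuracy/latency/energy reward are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pair():
    """Sample one (CNN, accelerator) candidate from a toy joint design space."""
    return np.array([
        rng.integers(4, 32),      # CNN depth (layers)
        rng.integers(16, 256),    # CNN base width (channels)
        rng.integers(64, 1024),   # accelerator processing elements (PEs)
        rng.integers(64, 2048),   # on-chip buffer size (KB)
    ], dtype=float)

def true_objective(x):
    """Stand-in for 'train the CNN + run a cycle-accurate simulation'.
    Returns a scalar reward combining accuracy, latency, and energy proxies."""
    depth, width, pes, buf = x
    acc = 1 - np.exp(-depth * width / 2000.0)   # toy accuracy proxy
    latency = depth * width / (pes + 1)         # toy latency proxy
    energy = pes * 0.01 + buf * 0.002           # toy energy proxy
    return acc - 0.01 * latency - 0.005 * energy

def surrogate_predict(X, y, query, k=5):
    """k-NN surrogate: mean prediction plus neighbour spread as an uncertainty proxy."""
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]
    return y[idx].mean(), y[idx].std()

# Warm-up: evaluate a few random pairs "expensively".
X = np.stack([sample_pair() for _ in range(8)])
y = np.array([true_objective(x) for x in X])

for step in range(20):
    # Propose many cheap candidates, score them with the surrogate (UCB-style:
    # mean + uncertainty), and only "expensively" evaluate the most promising one.
    cands = np.stack([sample_pair() for _ in range(256)])
    scores = [sum(surrogate_predict(X, y, c)) for c in cands]
    best = cands[int(np.argmax(scores))]
    X = np.vstack([X, best])
    y = np.append(y, true_objective(best))

print("best reward found:", y.max(), "for design:", X[int(np.argmax(y))])
```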
Related papers
- Building Efficient Lightweight CNN Models [0.0]
Convolutional Neural Networks (CNNs) are pivotal in image classification tasks due to their robust feature extraction capabilities.
This paper introduces a methodology to construct lightweight CNNs while maintaining competitive accuracy.
The proposed model achieves a state-of-the-art accuracy of 99% on handwritten-digit MNIST and 89% on Fashion-MNIST, with only 14,862 parameters and a model size of 0.17 MB.
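To make the parameter-budget arithmetic concrete, here is a small sketch that tallies the parameters of a hypothetical tiny CNN; the layer sizes are illustrative assumptions, not the paper's actual architecture.

```python
def conv_params(c_in, c_out, k, bias=True):
    """Parameters in a standard 2-D convolution: c_out * c_in * k * k plus bias terms."""
    return c_out * c_in * k * k + (c_out if bias else 0)

def dense_params(n_in, n_out, bias=True):
    return n_in * n_out + (n_out if bias else 0)

# Hypothetical tiny MNIST classifier (28x28x1 input), NOT the paper's exact model:
# conv 1->8 (3x3), pool, conv 8->16 (3x3), pool, global-average-pool, dense 16->10.
layers = [
    ("conv1", conv_params(1, 8, 3)),
    ("conv2", conv_params(8, 16, 3)),
    ("fc",    dense_params(16, 10)),
]
total = sum(p for _, p in layers)
for name, p in layers:
    print(f"{name:>5}: {p:6d} params")
print(f"total: {total:6d} params (~{total * 4 / 1024:.1f} KB at float32)")
```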
arXiv Detail & Related papers (2025-01-26T14:39:01Z)
- Neural Architecture Codesign for Fast Physics Applications [0.8692847090818803]
We develop a pipeline to streamline neural architecture codesign for physics applications.
We employ neural architecture search and network compression in a two-stage approach to discover hardware-efficient models.
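A minimal sketch of such a two-stage flow, with a toy fitness function in place of real training and simple magnitude pruning as the compression step (both are assumptions, not the paper's exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1: pick a candidate architecture (here just a depth/width pair) using a
# toy fitness that trades a pretend accuracy against parameter count.
candidates = [(d, w) for d in (2, 4, 8) for w in (16, 32, 64)]

def fitness(depth, width):
    acc_proxy = 1 - np.exp(-depth * width / 300.0)   # stand-in for validation accuracy
    params = depth * width * width                    # rough parameter proxy
    return acc_proxy - 1e-5 * params

best_depth, best_width = max(candidates, key=lambda c: fitness(*c))
print("stage 1 picked depth/width:", best_depth, best_width)

# Stage 2: compress the chosen model, e.g. magnitude pruning of a layer's weights.
w = rng.normal(size=(best_width, best_width))
sparsity = 0.7
threshold = np.quantile(np.abs(w), sparsity)
pruned = np.where(np.abs(w) >= threshold, w, 0.0)
print(f"kept {np.count_nonzero(pruned) / w.size:.0%} of weights after pruning")
```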
arXiv Detail & Related papers (2025-01-09T19:00:03Z)
- Hardware-Software Co-optimised Fast and Accurate Deep Reconfigurable Spiking Inference Accelerator Architecture Design Methodology [2.968768532937366]
Spiking Neural Networks (SNNs) have emerged as a promising approach to improve the energy efficiency of machine learning models.
We develop a hardware-software co-optimisation strategy to port software-trained deep neural networks (DNNs) to reduced-precision spiking models.
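The summary does not detail the conversion recipe; one common ingredient of such ports is reduced-precision weight quantization plus rate-coded spike generation, sketched below purely as an assumed illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize(w, bits=4):
    """Uniformly quantize weights to a reduced fixed-point precision."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def relu_to_spike_rate(activation, t_steps=32):
    """Rate coding: map a normalized, non-negative activation to spike trains."""
    rate = np.clip(activation, 0, 1)                          # firing probability
    return rng.random((t_steps,) + activation.shape) < rate   # Boolean spikes

w = rng.normal(size=(8, 8))
w_q = quantize(w, bits=4)
a = np.clip(rng.normal(size=(8,)), 0, None)                   # pretend ReLU output
spikes = relu_to_spike_rate(a / (a.max() + 1e-9))
print("mean quantization error:", np.abs(w - w_q).mean())
print("empirical spike rates:", spikes.mean(axis=0))
```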
arXiv Detail & Related papers (2024-10-07T05:04:13Z)
- LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection [0.0]
We focus on design choices of neural network architectures for efficient object detection based on FLOPs.
We propose several optimizations to enhance the efficiency of YOLO-based models.
This paper contributes to a new scaling paradigm for object detection and YOLO-centric models called LeYOLO.
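For reference, a FLOP count for convolutions (the metric such designs optimize against) can be computed as below; the 2x-MACs convention and the layer shapes are assumptions for illustration.

```python
def conv_flops(c_in, c_out, k, h_out, w_out, groups=1):
    """Multiply-accumulate count of a (possibly grouped/depthwise) 2-D convolution,
    reported as 2 * MACs, which is one common FLOP convention."""
    macs = (c_in // groups) * k * k * c_out * h_out * w_out
    return 2 * macs

# Compare a standard 3x3 conv with a depthwise-separable alternative at 56x56.
std = conv_flops(64, 128, 3, 56, 56)
dw = conv_flops(64, 64, 3, 56, 56, groups=64) + conv_flops(64, 128, 1, 56, 56)
print(f"standard 3x3 conv  : {std / 1e6:8.1f} MFLOPs")
print(f"depthwise separable: {dw / 1e6:8.1f} MFLOPs ({std / dw:.1f}x cheaper)")
```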
arXiv Detail & Related papers (2024-06-20T12:08:24Z)
- Rethinking Mobile Block for Efficient Attention-based Models [60.0312591342016]
This paper focuses on developing modern, efficient, lightweight models for dense predictions while trading off parameters, FLOPs, and performance.
Inverted Residual Block (IRB) serves as the infrastructure for lightweight CNNs, but no counterpart has been recognized by attention-based studies.
We extend the CNN-based IRB to attention-based models and abstract a one-residual Meta Mobile Block (MMB) for lightweight model design.
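A minimal sketch of the one-residual expand/mix/project pattern, assuming PyTorch: the depthwise-conv mixer corresponds to the classic IRB, and the "meta" idea is that this mixer slot can be replaced by an attention module (the exact MMB design is not reproduced here).

```python
import torch
import torch.nn as nn

class OneResidualBlock(nn.Module):
    """Expand -> token mixer -> project, with a single residual connection.
    Here the mixer is a depthwise conv (the classic IRB choice); the same slot
    could instead hold an attention-style mixer."""
    def __init__(self, dim, expand_ratio=4):
        super().__init__()
        hidden = dim * expand_ratio
        self.expand = nn.Sequential(nn.Conv2d(dim, hidden, 1), nn.SiLU())
        self.mix = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)  # depthwise
        self.project = nn.Conv2d(hidden, dim, 1)

    def forward(self, x):
        return x + self.project(self.mix(self.expand(x)))

x = torch.randn(1, 32, 14, 14)
print(OneResidualBlock(32)(x).shape)  # -> torch.Size([1, 32, 14, 14])
```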
arXiv Detail & Related papers (2023-01-03T15:11:41Z)
- Faster Attention Is What You Need: A Fast Self-Attention Neural Network Backbone Architecture for the Edge via Double-Condensing Attention Condensers [71.40595908386477]
We introduce a new faster attention condenser design called double-condensing attention condensers.
The resulting backbone (which we name AttendNeXt) achieves significantly higher inference throughput on an embedded ARM processor.
These promising results demonstrate that exploring different efficient architecture designs and self-attention mechanisms can lead to interesting new building blocks for TinyML applications.
arXiv Detail & Related papers (2022-08-15T02:47:33Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces classification time by three orders of magnitude, with a small 4.5% accuracy impact compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- NAAS: Neural Accelerator Architecture Search [16.934625310654553]
We propose Neural Accelerator Architecture Search (NAAS) to holistically search the neural network architecture, accelerator architecture, and compiler mappings.
As a data-driven approach, NAAS outperforms the human-designed Eyeriss with a 4.4x EDP reduction and a 2.7% accuracy improvement on ImageNet.
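Since the headline result is an EDP reduction, a quick sketch of how energy-delay product ranks candidate designs may help; all numbers below are made up for illustration.

```python
def edp(energy_mj, delay_ms):
    """Energy-delay product: lower is better; it rewards designs that are both
    fast and energy-efficient rather than optimizing one at the other's expense."""
    return energy_mj * delay_ms

# Hypothetical candidate accelerator configurations (values are illustrative only).
designs = {
    "baseline (Eyeriss-like)": {"energy_mj": 12.0, "delay_ms": 8.0},
    "more PEs, bigger buffer": {"energy_mj": 15.0, "delay_ms": 4.0},
    "co-searched design":      {"energy_mj":  6.0, "delay_ms": 3.5},
}
for name, d in sorted(designs.items(), key=lambda kv: edp(**kv[1])):
    print(f"{name:24s} EDP = {edp(**d):6.1f} mJ*ms")
```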
arXiv Detail & Related papers (2021-05-27T15:56:41Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare the estimation accuracy and fidelity of the generated mixed models and statistical models against the roofline model and a refined roofline model.
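For context, the roofline baseline mentioned above estimates layer time as the larger of the compute-bound and memory-bound times; the sketch below uses placeholder peak-compute and bandwidth figures, not the paper's measured hardware.

```python
def roofline_time(flops, bytes_moved, peak_flops_per_s, mem_bw_bytes_per_s):
    """Classic roofline estimate: a layer is bound either by compute or by memory."""
    compute_time = flops / peak_flops_per_s
    memory_time = bytes_moved / mem_bw_bytes_per_s
    return max(compute_time, memory_time)

# Hypothetical 3x3 conv layer (64->128 channels, 56x56 output) on hypothetical hardware.
flops = 2 * 64 * 9 * 128 * 56 * 56                                 # 2 * MACs
bytes_moved = (64 * 56 * 56 + 128 * 56 * 56 + 64 * 9 * 128) * 4    # fp32 in/out/weights
t = roofline_time(flops, bytes_moved, peak_flops_per_s=1e12, mem_bw_bytes_per_s=25e9)
bound = "compute" if flops / 1e12 > bytes_moved / 25e9 else "memory"
print(f"estimated layer time: {t * 1e3:.3f} ms ({bound}-bound)")
```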
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architectures and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors.
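A hedged sketch of predictor-guided evolutionary search over joint (architecture, recipe) encodings; the predictor, search-space choices, and mutation operator are stand-ins, not FBNetV3's actual components.

```python
import random

random.seed(0)

def random_pair():
    """One joint (architecture, training-recipe) encoding."""
    return {
        "depth": random.choice([12, 16, 20]),
        "width_mult": random.choice([0.5, 0.75, 1.0]),
        "lr": random.choice([0.05, 0.1, 0.2]),
        "epochs": random.choice([150, 250, 400]),
        "mixup": random.choice([0.0, 0.2]),
    }

def predictor(p):
    """Stand-in accuracy predictor; a real one would be a trained regressor."""
    return (p["depth"] * p["width_mult"] / 20
            + 0.1 * (p["epochs"] / 400) + 0.05 * p["mixup"] - abs(p["lr"] - 0.1))

def mutate(p):
    child = dict(p)
    key = random.choice(list(child))
    child[key] = random_pair()[key]
    return child

population = [random_pair() for _ in range(32)]
for gen in range(10):
    population.sort(key=predictor, reverse=True)
    parents = population[:8]                   # keep the predictor's top picks
    population = parents + [mutate(random.choice(parents)) for _ in range(24)]
print("best predicted pair:", max(population, key=predictor))
```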
arXiv Detail & Related papers (2020-06-03T05:20:21Z)
- SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation [97.78417228445883]
We present SmartExchange, an algorithm-hardware co-design framework for energy-efficient inference of deep neural networks (DNNs).
We develop a novel algorithm to enforce a specially favorable DNN weight structure, where each layerwise weight matrix can be stored as the product of a small basis matrix and a large sparse coefficient matrix whose non-zero elements are all power-of-2.
We further design a dedicated accelerator to fully utilize the SmartExchange-enforced weights to improve both energy efficiency and latency performance.
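The storage format described above can be illustrated as follows; the decomposition algorithm itself is not reproduced, and the matrix sizes, rank, and sparsity threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def nearest_power_of_two(x):
    """Round non-zero entries to the nearest signed power of 2 (so multiplications
    can be replaced by shifts in hardware); zeros stay zero."""
    out = np.zeros_like(x)
    nz = x != 0
    out[nz] = np.sign(x[nz]) * 2.0 ** np.round(np.log2(np.abs(x[nz])))
    return out

# Toy layer weight matrix W (64 x 64), re-expressed as B (64 x 8) @ C (8 x 64):
# B is small and dense, C is large, sparse, and power-of-2 valued.
rank = 8
B = rng.normal(size=(64, rank))
C = rng.normal(size=(rank, 64))
C[np.abs(C) < 1.0] = 0.0            # enforce sparsity in the coefficient matrix
C = nearest_power_of_two(C)         # enforce power-of-2 non-zero values
W_approx = B @ C                    # what the accelerator reconstructs on the fly

dense_cost = 64 * 64                             # values to store for a dense W
smart_cost = B.size + np.count_nonzero(C)        # basis + sparse coefficient entries
print("reconstructed weight shape:", W_approx.shape)
print(f"storage: {dense_cost} dense values vs ~{smart_cost} (basis + non-zeros)")
print(f"non-zero fraction of C: {np.count_nonzero(C) / C.size:.0%}")
```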
arXiv Detail & Related papers (2020-05-07T12:12:49Z)
- Best of Both Worlds: AutoML Codesign of a CNN and its Hardware Accelerator [21.765796576990137]
We automate HW-CNN codesign using NAS by including parameters from both the CNN model and the HW accelerator.
We jointly search for the best model-accelerator pair that boosts accuracy and efficiency.
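One way to picture the outcome of such a joint search is as a Pareto front over accuracy and efficiency; the candidate pairs and their metrics below are hypothetical.

```python
def pareto_front(pairs):
    """Keep (accuracy, latency) points not dominated by any other candidate
    (higher accuracy AND lower latency dominates)."""
    front = []
    for name, acc, lat in pairs:
        dominated = any(a >= acc and l <= lat and (a > acc or l < lat)
                        for _, a, l in pairs)
        if not dominated:
            front.append((name, acc, lat))
    return front

# Hypothetical jointly searched (CNN, accelerator) candidates: (name, top-1 %, ms).
candidates = [
    ("pair-A", 92.1, 6.0),
    ("pair-B", 93.0, 9.5),
    ("pair-C", 91.5, 3.2),
    ("pair-D", 92.0, 7.5),   # dominated by pair-A
]
for name, acc, lat in pareto_front(candidates):
    print(f"{name}: {acc:.1f}% top-1 @ {lat:.1f} ms")
```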
arXiv Detail & Related papers (2020-02-11T10:00:36Z)