Complexity-Driven CNN Compression for Resource-constrained Edge AI
- URL: http://arxiv.org/abs/2208.12816v1
- Date: Fri, 26 Aug 2022 16:01:23 GMT
- Title: Complexity-Driven CNN Compression for Resource-constrained Edge AI
- Authors: Muhammad Zawish, Steven Davy and Lizy Abraham
- Abstract summary: We propose a novel and computationally efficient pruning pipeline by exploiting the inherent layer-level complexities of CNNs.
We define three modes of pruning, namely parameter-aware (PA), FLOPs-aware (FA), and memory-aware (MA), to introduce versatile compression of CNNs.
- Score: 1.6114012813668934
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent advances in Artificial Intelligence (AI) on the Internet of Things
(IoT)-enabled network edge have realized edge intelligence in several
applications, such as smart agriculture, smart hospitals, and smart factories, by
enabling low latency and computational efficiency. However, deploying
state-of-the-art Convolutional Neural Networks (CNNs) such as VGG-16 and
ResNets on resource-constrained edge devices is practically infeasible due to
their large number of parameters and floating-point operations (FLOPs). Thus,
the concept of network pruning as a type of model compression is gaining
attention for accelerating CNNs on low-power devices. State-of-the-art pruning
approaches, whether structured or unstructured, do not consider the different
underlying complexities exhibited by individual convolutional layers, and they
follow a training-pruning-retraining pipeline that incurs additional
computational overhead. In this work, we propose a novel and computationally
efficient pruning pipeline by exploiting the inherent layer-level complexities
of CNNs. Unlike typical methods, our proposed complexity-driven algorithm
selects a particular layer for filter-pruning based on its contribution to
overall network complexity. We follow a procedure that directly trains the
pruned model and avoids the computationally complex ranking and fine-tuning
steps. Moreover, we define three modes of pruning, namely parameter-aware (PA),
FLOPs-aware (FA), and memory-aware (MA), to introduce versatile compression of
CNNs. Our results show the competitive performance of our approach in terms of
accuracy and acceleration. Lastly, we present the trade-off between different
resources and accuracy, which can help developers make informed decisions in
resource-constrained IoT environments.
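As a rough sketch of the complexity-driven selection idea (ours, not the authors' released code), the Python below scores each convolutional layer by its share of total network complexity under the three pruning modes and narrows the filters of the dominant layer. The per-layer counts are the standard formulas for a conv layer (for reference, VGG-16 has roughly 138M parameters in total); the framework choice and helper names such as layer_complexities are illustrative.

    import torch.nn as nn  # PyTorch is our assumption; the method is framework-agnostic

    def layer_complexities(convs, mode):
        """Share of total network complexity per conv layer.
        convs: list of (nn.Conv2d, out_h, out_w); mode: "PA" | "FA" | "MA"."""
        scores = []
        for conv, h, w in convs:
            k = conv.kernel_size[0]  # assumes square kernels
            params = k * k * conv.in_channels * conv.out_channels  # weights, bias ignored
            if mode == "PA":                      # parameter-aware
                scores.append(params)
            elif mode == "FA":                    # FLOPs-aware: MACs grow with output map
                scores.append(2 * params * h * w)
            elif mode == "MA":                    # memory-aware: activation footprint
                scores.append(conv.out_channels * h * w)
            else:
                raise ValueError(f"unknown mode {mode!r}")
        total = sum(scores)
        return [s / total for s in scores]

    def complexity_driven_prune(convs, mode, ratio=0.1):
        """Prune filters from the layer contributing most under `mode`."""
        shares = layer_complexities(convs, mode)
        i = max(range(len(shares)), key=shares.__getitem__)
        conv, h, w = convs[i]
        keep = max(1, int(conv.out_channels * (1 - ratio)))
        # Narrow the selected layer; in a real model the following layer's
        # in_channels (and any BatchNorm) must be rewired to match.
        convs[i] = (nn.Conv2d(conv.in_channels, keep, conv.kernel_size,
                              conv.stride, conv.padding), h, w)
        return i, keep

Consistent with the pipeline described above, the narrowed model would then be trained directly, skipping the per-filter ranking and fine-tuning stages of a train-prune-retrain loop.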
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this challenge by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- OTOv3: Automatic Architecture-Agnostic Neural Network Training and Compression from Structured Pruning to Erasing Operators [57.145175475579315]
This topic spans various techniques, from structured pruning to neural architecture search, encompassing both pruning and erasing operators perspectives.
We introduce the third-generation Only-Train-Once (OTOv3), the first framework to automatically train and compress a general DNN through pruning and erasing operations.
Our empirical results demonstrate the efficacy of OTOv3 across various benchmarks in structured pruning and neural architecture search.
arXiv Detail & Related papers (2023-12-15T00:22:55Z)
- Evolution of Convolutional Neural Network (CNN): Compute vs Memory bandwidth for Edge AI [0.0]
This article explores the relationship between CNN compute requirements and memory bandwidth in the context of Edge AI.
We examine the impact of increasing model complexity on both computational requirements and memory access patterns.
This analysis provides insights into designing efficient architectures and potential hardware accelerators for enhancing CNN performance on edge devices.
arXiv Detail & Related papers (2023-09-24T09:11:22Z)
- A Generalization of Continuous Relaxation in Structured Pruning [0.3277163122167434]
Trends indicate that deeper and larger neural networks with an increasing number of parameters achieve higher accuracy than smaller neural networks.
We generalize structured pruning with algorithms for network augmentation, pruning, sub-network collapse and removal.
The resulting CNN executes efficiently on GPU hardware without computationally expensive sparse matrix operations.
arXiv Detail & Related papers (2023-08-28T14:19:13Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on a latency- and accuracy-aware reward design, such a method can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC services.
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- Neural network relief: a pruning algorithm based on neural activity [47.57448823030151]
We propose a simple importance-score metric that deactivates unimportant connections.
We achieve comparable performance for LeNet architectures on MNIST.
The algorithm is not designed to minimize FLOPs when considering current hardware and software implementations.
arXiv Detail & Related papers (2021-09-22T15:33:49Z)
- Latency-Memory Optimized Splitting of Convolution Neural Networks for Resource Constrained Edge Devices [1.6873748786804317]
We argue that running CNNs between an edge device and the cloud amounts to solving a resource-constrained optimization problem.
Experiments on real-world edge devices show that LMOS ensures feasible execution of different CNN models at the edge.
arXiv Detail & Related papers (2021-07-19T19:39:56Z)
- ALF: Autoencoder-based Low-rank Filter-sharing for Efficient Convolutional Neural Networks [63.91384986073851]
We propose the autoencoder-based low-rank filter-sharing technique (ALF).
ALF shows a reduction of 70% in network parameters, 61% in operations and 41% in execution time, with minimal loss in accuracy.
arXiv Detail & Related papers (2020-07-27T09:01:22Z)