PLiNIO: A User-Friendly Library of Gradient-based Methods for
Complexity-aware DNN Optimization
- URL: http://arxiv.org/abs/2307.09488v1
- Date: Tue, 18 Jul 2023 07:11:14 GMT
- Title: PLiNIO: A User-Friendly Library of Gradient-based Methods for
Complexity-aware DNN Optimization
- Authors: Daniele Jahier Pagliari, Matteo Risso, Beatrice Alessandra Motetti,
Alessio Burrello
- Abstract summary: PLiNIO is an open-source library implementing a comprehensive set of state-of-the-art DNN design automation techniques.
We show that PLiNIO achieves up to 94.34% memory reduction for a <1% accuracy drop compared to a baseline architecture.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate yet efficient Deep Neural Networks (DNNs) are in high demand,
especially for applications that require their execution on constrained edge
devices. Finding such DNNs in a reasonable time for new applications requires
automated optimization pipelines since the huge space of hyper-parameter
combinations is impossible to explore extensively by hand. In this work, we
propose PLiNIO, an open-source library implementing a comprehensive set of
state-of-the-art DNN design automation techniques, all based on lightweight
gradient-based optimization, under a unified and user-friendly interface. With
experiments on several edge-relevant tasks, we show that combining the various
optimizations available in PLiNIO leads to rich sets of solutions that
Pareto-dominate the considered baselines in terms of accuracy vs model size.
Notably, PLiNIO achieves up to 94.34% memory reduction for a <1% accuracy
drop compared to a baseline architecture.
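To make the core idea concrete, the sketch below shows the general recipe behind gradient-based complexity-aware optimization: attach trainable gates to a network, derive a differentiable model-size proxy from them, and add that proxy to the task loss. This is a minimal PyTorch sketch of the technique, not PLiNIO's actual API; all names (GatedConv, TinyNet, size_proxy, lam) are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch (not PLiNIO's API): a conv layer whose output
# channels are masked by trainable gates, yielding a differentiable
# proxy for model size that can be co-optimized with the task loss.
class GatedConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.alpha = nn.Parameter(torch.zeros(out_ch))  # gate logits

    def gate(self):
        return torch.sigmoid(self.alpha)  # soft keep-probability per channel

    def size_proxy(self):
        # expected number of weights kept, differentiable w.r.t. alpha
        w_per_ch = self.conv.weight[0].numel()
        return self.gate().sum() * w_per_ch

    def forward(self, x):
        return self.conv(x) * self.gate().view(1, -1, 1, 1)

class TinyNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.c1, self.c2 = GatedConv(3, 16), GatedConv(16, 32)
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        x = F.relu(self.c1(x))
        x = F.relu(self.c2(x))
        return self.head(x.mean(dim=(2, 3)))

    def size_proxy(self):
        return self.c1.size_proxy() + self.c2.size_proxy()

model = TinyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-5  # accuracy-vs-size trade-off knob
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
opt.zero_grad()
loss = F.cross_entropy(model(x), y) + lam * model.size_proxy()
loss.backward()
opt.step()
```

Sweeping the trade-off weight lam and retraining is how libraries of this kind typically trace the accuracy-vs-size Pareto fronts the abstract refers to.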
Related papers
- HESSO: Towards Automatic Efficient and User Friendly Any Neural Network Training and Pruning [38.01465387364115]
The Only-Train-Once (OTO) series has recently been proposed to resolve many pain points of DNN training and pruning by streamlining the workflow.
We numerically demonstrate the efficacy of HESSO and its enhanced version HESSO-CRIC on a variety of applications.
arXiv Detail & Related papers (2024-09-11T05:28:52Z)
- Towards Leveraging AutoML for Sustainable Deep Learning: A Multi-Objective HPO Approach on Deep Shift Neural Networks [16.314030132923026]
We study the impact of hyperparameter optimization (HPO) to maximize DSNN performance while minimizing resource consumption.
Experimental results demonstrate the effectiveness of our approach, yielding models with over 80% accuracy at low computational cost.
arXiv Detail & Related papers (2024-04-02T14:03:37Z)
- Sparse-DySta: Sparsity-Aware Dynamic and Static Scheduling for Sparse Multi-DNN Workloads [65.47816359465155]
Running multiple deep neural networks (DNNs) in parallel has become an emerging workload in both edge devices and data centers.
We propose Dysta, a novel scheduler that utilizes both static sparsity patterns and dynamic sparsity information for sparse multi-DNN scheduling.
Our proposed approach outperforms the state-of-the-art methods with up to a 10% decrease in latency constraint violation rate and a nearly 4x reduction in average normalized turnaround time.
arXiv Detail & Related papers (2023-10-17T09:25:17Z)
- Flexible Channel Dimensions for Differentiable Architecture Search [50.33956216274694]
We propose a novel differentiable neural architecture search method with an efficient dynamic channel allocation algorithm.
We show that the proposed framework is able to find DNN architectures that are equivalent to previous methods in task accuracy and inference latency.
arXiv Detail & Related papers (2023-06-13T15:21:38Z)
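The entry above searches over channel dimensions with gradients. One way to picture dynamic channel allocation is a single continuous width parameter that softly truncates the channel axis, so the optimizer can grow or shrink a layer during the search; the sketch below is hypothetical (the SoftWidthConv name and temp parameter are not from the paper).

```python
import torch
import torch.nn as nn

# Hypothetical sketch of differentiable channel-count search: a learnable
# "width" scalar softly masks channels above its value, making the
# effective channel count differentiable.
class SoftWidthConv(nn.Module):
    def __init__(self, in_ch, max_out_ch, k=3, temp=4.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, max_out_ch, k, padding=k // 2)
        self.width = nn.Parameter(torch.tensor(float(max_out_ch)))
        self.register_buffer("idx", torch.arange(max_out_ch).float())
        self.temp = temp

    def mask(self):
        # ~1 for channels below the learned width, ~0 above it
        return torch.sigmoid((self.width - self.idx) * self.temp)

    def forward(self, x):
        return self.conv(x) * self.mask().view(1, -1, 1, 1)

layer = SoftWidthConv(3, 64)
out = layer(torch.randn(2, 3, 32, 32))
n_active = layer.mask().sum()  # differentiable effective channel count
print(out.shape, float(n_active))
```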
- Combining Multi-Objective Bayesian Optimization with Reinforcement Learning for TinyML [4.2019872499238256]
We propose a novel strategy for deploying Deep Neural Networks on microcontrollers (TinyML) based on Multi-Objective Bayesian optimization (MOBOpt).
Our methodology aims at efficiently finding tradeoffs between a DNN's predictive accuracy, memory consumption on a given target system, and computational complexity.
arXiv Detail & Related papers (2023-05-23T14:31:52Z)
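The entry above frames deployment as a multi-objective search. Any such optimizer needs a Pareto-filtering step over candidate networks; the helper below is a small self-contained illustration of that step, not code from the paper, with made-up Candidate fields for accuracy, memory, and MACs.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    name: str
    accuracy: float   # higher is better
    memory_kb: float  # lower is better
    macs: float       # lower is better

def dominates(a: Candidate, b: Candidate) -> bool:
    # a dominates b if it is no worse on every objective
    # and strictly better on at least one
    no_worse = (a.accuracy >= b.accuracy and a.memory_kb <= b.memory_kb
                and a.macs <= b.macs)
    better = (a.accuracy > b.accuracy or a.memory_kb < b.memory_kb
              or a.macs < b.macs)
    return no_worse and better

def pareto_front(cands: List[Candidate]) -> List[Candidate]:
    return [c for c in cands
            if not any(dominates(o, c) for o in cands)]

cands = [Candidate("A", 0.91, 512, 2e6),
         Candidate("B", 0.89, 128, 9e5),
         Candidate("C", 0.88, 256, 1.5e6)]  # C is dominated by B
print([c.name for c in pareto_front(cands)])  # -> ['A', 'B']
```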
- Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration [71.80326738527734]
We propose a general, fine-grained structured pruning scheme and corresponding compiler optimizations.
We show that our pruning scheme mapping methods, together with the general fine-grained structured pruning scheme, outperform the state-of-the-art DNN optimization framework.
arXiv Detail & Related papers (2021-11-22T23:53:14Z)
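As a generic illustration of the structured pruning the entry above builds on (the paper's per-layer scheme mapping is more elaborate), the snippet below removes whole output channels of a convolution by L2 magnitude using PyTorch's built-in pruning utilities; channel-level regularity is what mobile compilers can exploit.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Structured pruning example: zero out 50% of the output channels of a
# conv layer, ranked by the L2 norm of each channel's weights.
conv = nn.Conv2d(16, 32, 3, padding=1)
prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)

# half of the 32 output channels are now zeroed out
kept = (conv.weight.abs().sum(dim=(1, 2, 3)) > 0).sum()
print(f"channels kept: {int(kept)}/32")
prune.remove(conv, "weight")  # bake the mask into the weights
```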
- Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
arXiv Detail & Related papers (2021-08-09T08:45:47Z)
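The adaptive selection step in the entry above can be pictured with a toy epsilon-greedy bandit over a discretized context; this is not the paper's reinforcement-learning policy network, and all constants (N_MODELS, EPS, COST) are invented for illustration.

```python
import random

# Toy contextual bandit: pick which anomaly detector, ordered from cheap
# to complex (e.g. one per HEC layer), should handle each input, trading
# detection success against execution cost.
N_MODELS = 3            # e.g. device, edge, and cloud detectors
EPS = 0.1               # exploration rate
COST = [0.1, 0.3, 1.0]  # relative execution cost per model

# one running value estimate per (context bucket, model) pair
values = {(ctx, m): 0.0 for ctx in range(4) for m in range(N_MODELS)}
counts = {k: 0 for k in values}

def select(ctx: int) -> int:
    if random.random() < EPS:
        return random.randrange(N_MODELS)
    return max(range(N_MODELS), key=lambda m: values[(ctx, m)])

def update(ctx: int, m: int, detected: bool) -> None:
    # reward = detection success minus a cost penalty
    r = (1.0 if detected else 0.0) - 0.5 * COST[m]
    counts[(ctx, m)] += 1
    values[(ctx, m)] += (r - values[(ctx, m)]) / counts[(ctx, m)]

ctx = 2                        # e.g. a bucketed input-difficulty feature
m = select(ctx)
update(ctx, m, detected=True)  # feedback from a (proxy) label
```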
- A novel Deep Neural Network architecture for non-linear system identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z)
- Automated Design Space Exploration for optimised Deployment of DNN on Arm Cortex-A CPUs [13.628734116014819]
Deep learning on embedded devices has prompted the development of numerous methods to optimise the deployment of deep neural networks (DNNs).
There is a lack of research on cross-level optimisation as the space of approaches becomes too large to test and obtain a globally optimised solution.
We present a set of results for state-of-the-art DNNs on a range of Arm Cortex-A CPU platforms achieving up to 4x improvement in performance and over 2x reduction in memory.
arXiv Detail & Related papers (2020-06-09T11:00:06Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Networks (DNNs) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2-5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)