Dynamic Decision Tree Ensembles for Energy-Efficient Inference on IoT
Edge Nodes
- URL: http://arxiv.org/abs/2306.09789v1
- Date: Fri, 16 Jun 2023 11:59:18 GMT
- Title: Dynamic Decision Tree Ensembles for Energy-Efficient Inference on IoT
Edge Nodes
- Authors: Francesco Daghero, Alessio Burrello, Enrico Macii, Paolo Montuschi,
Massimo Poncino and Daniele Jahier Pagliari
- Abstract summary: Decision tree ensembles, such as Random Forests (RFs) and Gradient Boosting (GBTs), are particularly suited for this task, given their relatively low complexity.
This paper proposes the use of dynamic ensembles, which adjust the number of executed trees based both on a latency/energy target and on the complexity of the processed input.
We focus on deploying these algorithms on multi-core low-power IoT devices, designing a tool that automatically converts a Python ensemble into optimized C code.
- Score: 12.99136544903102
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increasing popularity of Internet of Things (IoT) devices, there is
a growing need for energy-efficient Machine Learning (ML) models that can run
on constrained edge nodes. Decision tree ensembles, such as Random Forests
(RFs) and Gradient Boosting (GBTs), are particularly suited for this task,
given their relatively low complexity compared to other alternatives. However,
their inference time and energy costs are still significant for edge hardware.
Given that said costs grow linearly with the ensemble size, this paper proposes
the use of dynamic ensembles, which adjust the number of executed trees based
both on a latency/energy target and on the complexity of the processed input,
to trade-off computational cost and accuracy. We focus on deploying these
algorithms on multi-core low-power IoT devices, designing a tool that
automatically converts a Python ensemble into optimized C code, and exploring
several optimizations that account for the available parallelism and memory
hierarchy. We extensively benchmark both static and dynamic RFs and GBTs on
three state-of-the-art IoT-relevant datasets, using an 8-core ultra-low-power
System-on-Chip (SoC), GAP8, as the target platform. Thanks to the proposed
early-stopping mechanisms, we achieve an energy reduction of up to 37.9% with
respect to static GBTs (8.82 uJ vs 14.20 uJ per inference) and 41.7% with
respect to static RFs (2.86 uJ vs 4.90 uJ per inference), without losing
accuracy compared to the static model.
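The input-dependent early-stopping idea described in the abstract can be sketched as follows. This is an illustrative Python sketch only, not the authors' conversion tool or their optimized C code; the averaging-based aggregation, the confidence threshold, and the callable-per-tree interface are all assumptions made for clarity.

```python
import numpy as np

def dynamic_predict(tree_probas, x, threshold=0.85):
    """Early-stopping ensemble inference (illustrative sketch).

    tree_probas: list of callables, one per tree, each mapping an input
    vector to a class-probability vector.
    Trees are evaluated one at a time; once the aggregated prediction is
    confident enough, the remaining trees are skipped, trading a small
    amount of accuracy for lower latency and energy on easy inputs.
    """
    agg = None
    for n_used, tree in enumerate(tree_probas, start=1):
        p = np.asarray(tree(x))
        agg = p if agg is None else agg + p
        probs = agg / n_used              # running mean of tree outputs
        if probs.max() >= threshold:      # confident enough: stop early
            break
    return int(np.argmax(probs)), n_used
```

Easy inputs trigger the threshold after one or two trees, while ambiguous inputs fall through to the full ensemble, which is the cost/accuracy trade-off the paper exploits.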
Related papers
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs)
We show that our system and method can achieve 1.45 - 9.39x speedup compared to baseline methods while ensuring convergence.
arXiv Detail & Related papers (2024-10-16T16:13:19Z)
- Efficient Deep Learning Models for Privacy-preserving People Counting on Low-resolution Infrared Arrays [11.363207467478134]
Infrared (IR) array sensors offer a low-cost, energy-efficient, and privacy-preserving solution for people counting.
Previous work has shown that Deep Learning (DL) can yield superior performance on this task.
We compare 6 different DL architectures on a novel dataset composed of IR images collected from a commercial 8x8 array.
arXiv Detail & Related papers (2023-04-12T15:29:28Z)
- Energy-efficient Task Adaptation for NLP Edge Inference Leveraging Heterogeneous Memory Architectures [68.91874045918112]
adapter-ALBERT is an efficient model optimization for maximal data reuse across different tasks.
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) approach, Soft Actor-Critic for discrete (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
With a latency- and accuracy-aware reward design, such a computation can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- 8-bit Optimizers via Block-wise Quantization [57.25800395197516]
Stateful optimizers maintain statistics over time, e.g., the exponentially smoothed sum (SGD with momentum) or squared sum (Adam) of past gradient values.
This state can be used to accelerate optimization compared to plain gradient descent but uses memory that might otherwise be allocated to model parameters.
In this paper, we develop the first optimizers that use 8-bit statistics while maintaining the performance levels of using 32-bit optimizer states.
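The block-wise quantization idea behind this approach can be sketched roughly as follows: the state tensor is split into fixed-size blocks, each normalized by its own absolute maximum before rounding to 8 bits. This is an illustrative sketch only; the block size, the symmetric int8 mapping, and the function names are assumptions, not the paper's exact scheme.

```python
import numpy as np

def blockwise_quantize(x, block_size=64):
    """Quantize a 1-D float array to int8 with one absmax scale per block."""
    pad = (-len(x)) % block_size                 # pad so length divides evenly
    xp = np.pad(x, (0, pad)).reshape(-1, block_size)
    scales = np.abs(xp).max(axis=1, keepdims=True)
    scales[scales == 0] = 1.0                    # avoid divide-by-zero on empty blocks
    q = np.round(xp / scales * 127).astype(np.int8)
    return q, scales.ravel(), len(x)

def blockwise_dequantize(q, scales, n):
    """Invert the mapping above, dropping the padding."""
    xp = q.astype(np.float32) / 127 * scales[:, None]
    return xp.ravel()[:n]
```

Per-block scales bound the rounding error by each block's local magnitude, so a single outlier value degrades only its own block rather than the whole tensor, which is the key advantage over a single global scale.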
arXiv Detail & Related papers (2021-10-06T15:43:20Z)
- DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers [105.74546828182834]
We show a hardware-efficient dynamic inference regime, named dynamic weight slicing, which adaptively slices a part of the network parameters for inputs with diverse difficulty levels.
We present dynamic slimmable network (DS-Net) and dynamic slice-able network (DS-Net++) by input-dependently adjusting filter numbers of CNNs and multiple dimensions in both CNNs and transformers.
arXiv Detail & Related papers (2021-09-21T09:57:21Z)
- EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference [82.1584439276834]
Transformer-based language models such as BERT provide significant accuracy improvement for a multitude of natural language processing (NLP) tasks.
We present EdgeBERT, an in-depth algorithm-hardware co-design for latency-aware energy optimization for multi-task NLP.
arXiv Detail & Related papers (2020-11-28T19:21:47Z)
- Sparse Systolic Tensor Array for Efficient CNN Hardware Acceleration [14.958793135751149]
Convolutional neural network (CNN) inference on mobile devices demands efficient hardware acceleration of low-precision (INT8) general matrix multiplication (GEMM).
Exploiting data sparsity is a common approach to further accelerate GEMM for CNN inference, and in particular, structural sparsity has the advantages of predictable load balancing and very low index overhead.
We address a key architectural challenge with structural sparsity: how to provide support for a range of sparsity levels while maintaining high utilization of the hardware.
arXiv Detail & Related papers (2020-09-04T20:17:42Z) - Q-EEGNet: an Energy-Efficient 8-bit Quantized Parallel EEGNet
Implementation for Edge Motor-Imagery Brain--Machine Interfaces [16.381467082472515]
Motor-Imagery Brain--Machine Interfaces (MI-BMIs) promise direct and accessible communication between human brains and machines.
Deep learning models have emerged for classifying EEG signals.
These models often exceed the limitations of edge devices due to their memory and computational requirements.
arXiv Detail & Related papers (2020-04-24T12:29:03Z) - PERMDNN: Efficient Compressed DNN Architecture with Permuted Diagonal
Matrices [35.90103072918056]
Deep neural networks (DNNs) have emerged as the most important and popular artificial intelligence (AI) technique.
The growth of model size poses a key energy efficiency challenge for the underlying computing platform.
This paper proposes PermDNN, a novel approach to generate and execute hardware-friendly structured sparse DNN models.
arXiv Detail & Related papers (2020-04-23T02:26:40Z) - Pre-defined Sparsity for Low-Complexity Convolutional Neural Networks [9.409651543514615]
This work introduces convolutional layers with pre-defined sparse 2D kernels that have support sets that repeat periodically within and across filters.
Due to the efficient storage of our periodic sparse kernels, the parameter savings can translate into considerable improvements in energy efficiency.
arXiv Detail & Related papers (2020-01-29T07:10:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.