ATHEENA: A Toolflow for Hardware Early-Exit Network Automation
- URL: http://arxiv.org/abs/2304.08400v1
- Date: Mon, 17 Apr 2023 16:06:58 GMT
- Title: ATHEENA: A Toolflow for Hardware Early-Exit Network Automation
- Authors: Benjamin Biggs, Christos-Savvas Bouganis, George A. Constantinides
- Abstract summary: ATHEENA is an automated FPGA toolflow for Hardware Early-Exit Network Automation that leverages the probability of samples exiting early from such networks to scale the resources allocated to different sections of the network.
- Score: 11.623574576259859
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The continued need for improvements in accuracy, throughput, and efficiency
of Deep Neural Networks has resulted in a multitude of methods that make the
most of custom architectures on FPGAs. These include the creation of
hand-crafted networks and the use of quantization and pruning to reduce
extraneous network parameters. However, with the potential of static solutions
already well exploited, we propose to shift the focus to using the varying
difficulty of individual data samples to further improve efficiency and reduce
average compute for classification. Input-dependent computation allows the
network to make runtime decisions and finish a task early if the result meets a
confidence threshold. Early-Exit network architectures have become an
increasingly popular way to implement such behaviour in software.
We create: A Toolflow for Hardware Early-Exit Network Automation (ATHEENA),
an automated FPGA toolflow that leverages the probability of samples exiting
early from such networks to scale the resources allocated to different sections
of the network. The toolflow uses the data-flow model of fpgaConvNet, extended
to support Early-Exit networks as well as Design Space Exploration to optimize
the generated streaming architecture hardware with the goal of increasing
throughput/reducing area while maintaining accuracy. Experimental results on
three different networks demonstrate a throughput increase of $2.00\times$ to
$2.78\times$ compared to an optimized baseline network implementation with no
early exits. Additionally, the toolflow can achieve a throughput matching the
same baseline with as low as $46\%$ of the resources the baseline requires.
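To make the input-dependent computation described in the abstract concrete, the following is a minimal PyTorch-style sketch of a two-stage early-exit classifier: an intermediate exit computes class probabilities, and a sample stops there if its top softmax probability clears a confidence threshold. The layer sizes, module names, and threshold value are illustrative assumptions, not the architecture ATHEENA generates.

```python
# Minimal illustrative sketch of confidence-threshold early exit
# (an assumption-based example, not the ATHEENA-generated hardware design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes: int = 10, threshold: float = 0.9):
        super().__init__()
        # First network section plus a cheap intermediate classifier (early exit).
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
        self.exit1 = nn.Linear(16 * 8 * 8, num_classes)
        # Second (more expensive) section with the final classifier.
        self.stage2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.exit2 = nn.Linear(32 * 4 * 4, num_classes)
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):
        # Assumes a single sample (batch size 1) so the exit decision is per-input.
        f1 = self.stage1(x)
        probs1 = F.softmax(self.exit1(f1.flatten(1)), dim=1)
        conf, pred = probs1.max(dim=1)
        if conf.item() >= self.threshold:
            return pred, "early"              # confident: skip the second section
        logits2 = self.exit2(self.stage2(f1).flatten(1))
        return logits2.argmax(dim=1), "final"

net = EarlyExitNet()
prediction, exit_taken = net(torch.randn(1, 3, 32, 32))
```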
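The resource-scaling idea can also be illustrated with a deliberately simplified back-of-the-envelope model (not the fpgaConvNet/ATHEENA cost model or Design Space Exploration): if a fraction p of samples leaves at the early exit, the deeper section only has to sustain (1 - p) of the input rate, so a balanced design can give it proportionally fewer resources for the same overall throughput. The function name, work units, and example numbers below are assumptions for illustration.

```python
# Back-of-the-envelope resource-balancing sketch (an illustrative assumption,
# not the ATHEENA/fpgaConvNet design-space exploration).
def balanced_allocation(p_exit: float, work_stage1: float, work_stage2: float,
                        total_resources: float) -> dict:
    """Split resources so both pipeline stages sustain the same input rate.

    p_exit       : probability a sample leaves at the early exit
    work_stage1  : ops per sample in the first section (all samples pass through)
    work_stage2  : ops per sample in the second section (only 1 - p_exit of samples)
    Stage throughput is modelled as resources / effective work per input.
    """
    eff1 = work_stage1                      # every sample uses stage 1
    eff2 = (1.0 - p_exit) * work_stage2     # only "hard" samples use stage 2
    r1 = total_resources * eff1 / (eff1 + eff2)
    r2 = total_resources - r1
    throughput = r1 / eff1                  # equals r2 / eff2 by construction
    return {"stage1": r1, "stage2": r2, "throughput": throughput}

# Example: with 60% of samples exiting early, the deeper section can be given
# far fewer resources while the pipeline still sustains the same input rate.
print(balanced_allocation(p_exit=0.6, work_stage1=1.0, work_stage2=3.0,
                          total_resources=100.0))
```

Under this toy model, the early-exit probability directly determines how much of the resource budget the deeper layers need, which is the intuition behind the reported throughput gains over a uniformly provisioned baseline without early exits.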
Related papers
- A Generalization of Continuous Relaxation in Structured Pruning [0.3277163122167434]
Trends indicate that deeper and larger neural networks with an increasing number of parameters achieve higher accuracy than smaller neural networks.
We generalize structured pruning with algorithms for network augmentation, pruning, sub-network collapse and removal.
The resulting CNN executes efficiently on GPU hardware without computationally expensive sparse matrix operations.
arXiv Detail & Related papers (2023-08-28T14:19:13Z) - Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural
Networks on Edge NPUs [74.83613252825754]
"smart ecosystems" are being formed where sensing happens concurrently rather than standalone.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes.
arXiv Detail & Related papers (2022-09-27T15:04:01Z) - Network Calculus with Flow Prolongation -- A Feedforward FIFO Analysis
enabled by ML [73.11023209243326]
Flow Prolongation (FP) has been shown to improve delay bound accuracy significantly.
We introduce DeepFP, an approach to make FP scale by predicting prolongations using machine learning.
DeepFP reduces delay bounds by 12.1% on average at negligible additional computational cost.
arXiv Detail & Related papers (2022-02-07T08:46:47Z) - perf4sight: A toolflow to model CNN training performance on Edge GPUs [16.61258138725983]
This work proposes perf4sight, an automated methodology for developing accurate models that predict CNN training memory footprint and latency.
With PyTorch as the framework and NVIDIA Jetson TX2 as the target device, the developed models predict training memory footprint and latency with 95% and 91% accuracy respectively.
arXiv Detail & Related papers (2021-08-12T07:55:37Z) - Multi-Exit Semantic Segmentation Networks [78.44441236864057]
We propose a framework for converting state-of-the-art segmentation models to MESS networks:
specially trained CNNs that employ parametrised early exits along their depth to save computation during inference on easier samples.
We co-optimise the number, placement and architecture of the attached segmentation heads, along with the exit policy, to adapt to the device capabilities and application-specific requirements.
arXiv Detail & Related papers (2021-06-07T11:37:03Z) - ItNet: iterative neural networks with small graphs for accurate and
efficient anytime prediction [1.52292571922932]
In this study, we introduce a class of network models that have a small memory footprint in terms of their computational graphs.
We show state-of-the-art results for semantic segmentation on the CamVid and Cityscapes datasets.
arXiv Detail & Related papers (2021-01-21T15:56:29Z) - Enabling certification of verification-agnostic networks via
memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy, from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z) - Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive
Meta-Pruning [83.59005356327103]
A common limitation of most existing pruning techniques is that they require pre-training of the network at least once before pruning.
We propose STAMP, which task-adaptively prunes a network pretrained on a large reference dataset by generating a pruning mask on it as a function of the target dataset.
We validate STAMP against recent advanced pruning methods on benchmark datasets.
arXiv Detail & Related papers (2020-06-22T10:57:43Z) - Dataflow Aware Mapping of Convolutional Neural Networks Onto Many-Core
Platforms With Network-on-Chip Interconnect [0.0764671395172401]
Machine intelligence, especially using convolutional neural networks (CNNs), has become a large area of research in recent years.
Many-core platforms consisting of several homogeneous cores can alleviate limitations with regard to physical implementation at the expense of an increased dataflow mapping effort.
This work presents an automated mapping strategy starting at the single-core level with different optimization targets for minimal runtime and minimal off-chip memory accesses.
The strategy is then extended towards a suitable many-core mapping scheme and evaluated using a scalable system-level simulation with a network-on-chip interconnect.
arXiv Detail & Related papers (2020-06-18T17:13:18Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)