Network Calculus with Flow Prolongation -- A Feedforward FIFO Analysis enabled by ML
- URL: http://arxiv.org/abs/2202.03004v1
- Date: Mon, 7 Feb 2022 08:46:47 GMT
- Title: Network Calculus with Flow Prolongation -- A Feedforward FIFO Analysis enabled by ML
- Authors: Fabien Geyer and Alexander Scheffler and Steffen Bondorf
- Abstract summary: Flow Prolongation (FP) has been shown to improve delay bound accuracy significantly.
We introduce DeepFP, an approach to make FP scale by predicting prolongations using machine learning.
DeepFP reduces delay bounds by 12.1% on average at negligible additional computational cost.
- Score: 73.11023209243326
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The derivation of upper bounds on data flows' worst-case traversal times is
an important task in many application areas. For accurate bounds, model
simplifications should be avoided even in large networks. Network Calculus (NC)
provides a modeling framework and different analyses for delay bounding. We
investigate the analysis of feedforward networks where all queues implement
First-In First-Out (FIFO) service. Correctly considering the effect of data
flows on each other under FIFO is already a challenging task. Yet, the
fastest available NC FIFO analysis suffers from limitations resulting in
unnecessarily loose bounds. A feature called Flow Prolongation (FP) has been
shown to improve delay bound accuracy significantly. Unfortunately, FP needs to
be executed within the NC FIFO analysis very often and each time it creates an
exponentially growing set of alternative networks with prolongations. FP
therefore does not scale and has been out of reach for the exhaustive analysis
of large networks. We introduce DeepFP, an approach to make FP scale by
predicting prolongations using machine learning. In our evaluation, we show
that DeepFP can improve results in FIFO networks considerably. Compared to the
standard NC FIFO analysis, DeepFP reduces delay bounds by 12.1% on average at
negligible additional computational cost.
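To make the scaling problem and the DeepFP shortcut concrete, here is a minimal Python sketch. The data model and the names (enumerate_prolongations, delay_bound, predictor) are illustrative assumptions, not the authors' implementation; the actual analysis and the trained predictor are far more involved.

```python
from itertools import product

# Toy model, for illustration only: cross-flow i leaves the analyzed tandem
# at server sinks[i] and may be prolonged to any later server up to
# last_server. Every combination of choices yields an alternative network.

def enumerate_prolongations(sinks, last_server):
    """Yield every combination of prolonged sinks: exponential in len(sinks)."""
    return product(*(range(s, last_server + 1) for s in sinks))

def delay_bound(prolonged_sinks):
    """Stand-in for an NC FIFO delay-bound analysis of one alternative
    network; a real implementation would run the full FIFO analysis here."""
    return float(sum(prolonged_sinks))  # dummy value, illustration only

def exhaustive_fp(sinks, last_server):
    """Exhaustive FP: analyze all alternative networks, keep the best bound."""
    return min(delay_bound(p) for p in enumerate_prolongations(sinks, last_server))

def deep_fp(sinks, last_server, predictor):
    """DeepFP idea: a trained model predicts one promising prolongation per
    flow, so only a single alternative network needs to be analyzed."""
    return delay_bound(predictor(sinks, last_server))

# Ten cross-flows with three prolongation options each already mean
# 3**10 = 59049 alternative networks for the exhaustive search, versus a
# single prediction plus one analysis run for DeepFP.
```

Replacing the exponential enumeration with one prediction and one analysis run is what keeps DeepFP's additional computational cost negligible.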
Related papers
- ATHEENA: A Toolflow for Hardware Early-Exit Network Automation [11.623574576259859]
ATHEENA is a toolflow that leverages the probability of samples exiting early from early-exit networks to scale the resources allocated to different sections of the network.
arXiv Detail & Related papers (2023-04-17T16:06:58Z)
- Variational Inference on the Final-Layer Output of Neural Networks [3.146069168382982]
This paper proposes to combine the advantages of standard and Bayesian neural networks by performing Variational Inference in the Final layer Output space (VIFO).
We use neural networks to learn the mean and the variance of the probabilistic output (see the sketch after this entry).
Experiments show that VIFO provides a good tradeoff in terms of run time and uncertainty quantification, especially for out-of-distribution data.
arXiv Detail & Related papers (2023-02-05T16:19:01Z)
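A minimal PyTorch sketch of learning the mean and the variance of the final-layer output; the class name, head layout, and sampling scheme are illustrative assumptions, and the variational regularization used during training is omitted.

```python
import torch
import torch.nn as nn

class VIFOHead(nn.Module):
    """Sketch: the backbone's features parameterize a Gaussian over the
    logits via two linear heads (mean and log-variance)."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.mean = nn.Linear(feat_dim, num_classes)
        self.logvar = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor, n_samples: int = 8) -> torch.Tensor:
        mu, logvar = self.mean(feats), self.logvar(feats)
        std = torch.exp(0.5 * logvar)
        # Reparameterization: sample logits, then average class probabilities
        # over samples to get an uncertainty-aware predictive distribution.
        eps = torch.randn(n_samples, *mu.shape, device=feats.device)
        return torch.softmax(mu + std * eps, dim=-1).mean(dim=0)

# Usage: probs = VIFOHead(128, 10)(torch.randn(32, 128))  # shape (32, 10)
```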
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in the frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency-domain learning through a single transform: transform once (T1), sketched after this entry.
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
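A minimal PyTorch sketch of the transform-once pattern: one forward FFT, learned layers applied directly in the frequency domain, one inverse FFT. The pointwise complex weights and the fixed depth are illustrative choices, not T1's exact architecture.

```python
import torch
import torch.nn as nn

class TransformOnce(nn.Module):
    """Sketch: avoid a transform pair around every layer by staying in the
    frequency domain between a single rfft and a single irfft."""
    def __init__(self, n_modes: int, depth: int = 3):
        super().__init__()
        # One complex pointwise weight per layer, stored as (real, imag) pairs.
        self.weights = nn.ParameterList(
            [nn.Parameter(0.02 * torch.randn(n_modes, 2)) for _ in range(depth)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = torch.fft.rfft(x)                     # transform once (forward)
        for w in self.weights:
            wc = torch.view_as_complex(w)[: z.shape[-1]]
            z = z * wc                            # learned frequency mixing
        return torch.fft.irfft(z, n=x.shape[-1])  # transform once (inverse)

# Usage: y = TransformOnce(n_modes=65)(torch.randn(8, 128))
# Requires n_modes >= 128 // 2 + 1 frequency modes for length-128 signals.
```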
- Receptive Field-based Segmentation for Distributed CNN Inference Acceleration in Collaborative Edge Computing [93.67044879636093]
We study inference acceleration using distributed convolutional neural networks (CNNs) in a collaborative edge computing network.
We propose a novel collaborative edge computing scheme that uses fused-layer parallelization to partition a CNN model into multiple blocks of convolutional layers (see the sketch after this entry).
arXiv Detail & Related papers (2022-07-22T18:38:11Z)
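The receptive-field-based idea can be sketched in a few lines: a device computing a slice of a block's output only needs the input indices inside that slice's receptive field. A 1-D Python sketch under assumed (kernel, stride, padding) conventions; the function name is hypothetical.

```python
def input_slice(out_range, layers):
    """Map an output index range back to the input index range it depends
    on, for a block of fused 1-D conv layers given as (kernel, stride,
    padding) triples, applied in order."""
    lo, hi = out_range
    for kernel, stride, padding in reversed(layers):
        lo = lo * stride - padding
        hi = hi * stride - padding + kernel - 1
    return lo, hi

# Two fused 3-tap, stride-1, padding-1 conv layers: output rows 0..31
# assigned to one edge device require input rows -2..33 (clamped to the
# image), i.e. a 2-row halo around the partition.
print(input_slice((0, 31), [(3, 1, 1), (3, 1, 1)]))  # -> (-2, 33)
```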
- OFedQIT: Communication-Efficient Online Federated Learning via Quantization and Intermittent Transmission [7.6058140480517356]
Online federated learning (OFL) is a promising framework to collaboratively learn a sequence of non-linear functions (or models) from distributed streaming data.
We propose a communication-efficient OFL algorithm (named OFedQIT) by means of quantization and intermittent transmission, both sketched after this entry.
Our analysis reveals that OFedQIT successfully addresses the drawbacks of OFedAvg while maintaining superior learning accuracy.
arXiv Detail & Related papers (2022-05-13T07:46:43Z)
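A minimal Python sketch of the two ingredients, quantization and intermittent transmission. The uniform stochastic quantizer and the fixed transmission period are simplifying assumptions, not OFedQIT's exact scheme.

```python
import math
import random

def stochastic_quantize(update, levels=16):
    """Quantize a model update to `levels` uniform steps, rounding up with
    probability equal to the fractional part (unbiased in expectation)."""
    scale = max(abs(v) for v in update) or 1.0
    step = scale / levels
    quantized = []
    for v in update:
        q = v / step
        low = math.floor(q)
        quantized.append(step * (low + (random.random() < q - low)))
    return quantized

def client_round(update, t, period=3):
    """Intermittent transmission: upload a (quantized) update only every
    `period` rounds; in all other rounds, send nothing."""
    if t % period != 0:
        return None                      # skip the uplink this round
    return stochastic_quantize(update)   # cheap low-precision upload
```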
- CPNet: Cross-Parallel Network for Efficient Anomaly Detection [20.84973451610082]
We propose the Cross-Parallel Network (CPNet) for efficient anomaly detection, which minimizes computation without performance drops.
An inter-network shift module is incorporated to capture temporal relationships among sequential frames to enable more accurate future predictions.
arXiv Detail & Related papers (2021-08-10T05:29:37Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- Fast Fourier Intrinsic Network [41.95712986029093]
We propose the Fast Fourier Intrinsic Network, FFI-Net, that operates in the spectral domain.
Weights in FFI-Net are optimized in the spectral domain, allowing faster convergence to a lower error.
It achieves state-of-the-art performance on MPI-Sintel, MIT Intrinsic, and IIW datasets.
arXiv Detail & Related papers (2020-11-09T18:14:39Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale data with a deep neural network as the predictive model.
In theory, our algorithm requires a much smaller number of communication rounds.
Our experiments on several benchmark datasets confirm our theory and show the effectiveness of the proposed algorithm.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)