Superiority of Simplicity: A Lightweight Model for Network Device
Workload Prediction
- URL: http://arxiv.org/abs/2007.03568v1
- Date: Tue, 7 Jul 2020 15:44:16 GMT
- Title: Superiority of Simplicity: A Lightweight Model for Network Device
Workload Prediction
- Authors: Alexander Acker, Thorsten Wittkopp, Sasho Nedelkoski, Jasmin
Bogatinovski, Odej Kao
- Abstract summary: We propose a lightweight solution for KPI series prediction based on historic observations.
It consists of a weighted heterogeneous ensemble method composed of two models - a neural network and a mean predictor.
It achieves an overall $R^2$ score of 0.10 on the preliminary 10% test data of the FedCSIS 2020 challenge dataset.
- Score: 58.98112070128482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid growth and distribution of IT systems increase their
complexity and complicate operation and maintenance. To sustain control over
large sets of hosts and the connecting networks, monitoring solutions are
employed and constantly enhanced. They collect diverse key performance
indicators (KPIs) (e.g. CPU utilization, allocated memory, etc.) and provide
detailed information about the system state. Storing such metrics over a period
of time naturally raises the motivation of predicting future KPI progress based
on past observations. Although a variety of time series forecasting methods
exist, forecasting the progress of IT system KPIs is very hard. First, KPI
types like CPU utilization or allocated memory are very different and hard to
express with a single model. Second, system components are interconnected and
constantly changing due to soft- or firmware updates and hardware
modernization. Thus, frequent model retraining or fine-tuning must be expected.
Therefore, we propose a lightweight solution for KPI series prediction based on
historic observations. It consists of a weighted heterogeneous ensemble method
composed of two models - a neural network and a mean predictor. As the ensemble
method, a weighted summation is used, with a heuristic employed to set the
weights. The modelling approach is evaluated on the available FedCSIS 2020
challenge dataset and achieves an overall $R^2$ score of 0.10 on the
preliminary 10% test data and 0.15 on the complete test data. We publish our
code in the following GitHub repository: https://github.com/citlab/fed_challenge
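The following minimal sketch illustrates the described weighted ensemble of a neural network and a mean predictor; the toy KPI series, the network architecture, and the grid-search weighting heuristic are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy KPI series: sliding windows of past observations predict the next value.
series = np.sin(np.linspace(0, 50, 2000)) + 0.1 * rng.standard_normal(2000)
window = 24
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X_train, X_val, y_train, y_val = X[:1500], X[1500:], y[:1500], y[1500:]

# Model 1: a small neural network on the windowed history (assumed architecture).
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
nn.fit(X_train, y_train)

# Model 2: a mean predictor over the input window.
mean_pred = lambda X: X.mean(axis=1)

def r2(y_true, y_hat):
    ss_res = np.sum((y_true - y_hat) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Weighting heuristic (assumption): pick the weight w on a grid that
# maximizes the validation R^2 of the weighted summation.
best_w = max(np.linspace(0, 1, 21),
             key=lambda w: r2(y_val, w * nn.predict(X_val) + (1 - w) * mean_pred(X_val)))
ensemble = lambda X: best_w * nn.predict(X) + (1 - best_w) * mean_pred(X)
print(f"w={best_w:.2f}, validation R^2={r2(y_val, ensemble(X_val)):.3f}")
```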
Related papers
- Tackling Data Heterogeneity in Federated Time Series Forecasting [61.021413959988216]
Time series forecasting plays a critical role in various real-world applications, including energy consumption prediction, disease transmission monitoring, and weather forecasting.
Most existing methods rely on a centralized training paradigm, where large amounts of data are collected from distributed devices to a central cloud server.
We propose a novel framework, Fed-TREND, to address data heterogeneity by generating informative synthetic data as auxiliary knowledge carriers.
arXiv Detail & Related papers (2024-11-24T04:56:45Z)
- A Bayesian Approach to Data Point Selection [24.98069363998565]
Data point selection (DPS) is becoming a critical topic in deep learning.
Existing approaches to DPS are predominantly based on a bi-level optimisation (BLO) formulation.
We propose a novel Bayesian approach to DPS.
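A rough sketch of the bi-level optimisation (BLO) view of data point selection mentioned above: the inner problem fits a model under per-example weights, the outer problem tunes the weights to reduce validation loss. The linear model, finite-difference outer gradient, and all constants are illustrative assumptions, not the paper's Bayesian method.

```python
import numpy as np

rng = np.random.default_rng(0)
X_tr = rng.standard_normal((40, 3)); beta = np.array([1.0, -2.0, 0.5])
y_tr = X_tr @ beta + 0.1 * rng.standard_normal(40)
y_tr[:5] += 5.0                      # a few corrupted training points
X_val = rng.standard_normal((20, 3)); y_val = X_val @ beta

def inner_solve(w):
    # Inner problem: weighted least squares, solved in closed form.
    W = np.diag(w)
    return np.linalg.solve(X_tr.T @ W @ X_tr + 1e-6 * np.eye(3), X_tr.T @ W @ y_tr)

def val_loss(w):
    theta = inner_solve(w)
    return np.mean((X_val @ theta - y_val) ** 2)

w = np.ones(40)
for _ in range(200):                 # outer loop: finite-difference descent on w
    g = np.array([(val_loss(w + 1e-4 * np.eye(40)[i]) - val_loss(w)) / 1e-4
                  for i in range(40)])
    w = np.clip(w - 5.0 * g, 0.0, 1.0)
print("mean weight on corrupted points:", w[:5].mean())
print("mean weight on clean points:   ", w[5:].mean())
```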
arXiv Detail & Related papers (2024-11-06T09:04:13Z)
- GCEPNet: Graph Convolution-Enhanced Expectation Propagation for Massive MIMO Detection [5.714553194279462]
We show that a real-valued system can be modeled as spectral signal convolution on a graph, through which the correlation between unknown variables can be captured.
Based on such analysis, we propose graph convolution-enhanced expectation propagation (GCEPNet) with better generalization capacity.
arXiv Detail & Related papers (2024-04-23T10:13:39Z)
- Stochastic Approximation Approach to Federated Machine Learning [0.0]
This paper examines federated learning (FL) in a Stochastic Approximation (SA) framework.
FL is a collaborative way to train neural network models across various participants or clients.
It is observed that the proposed algorithm is robust and gives more reliable estimates of the weights.
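A minimal sketch of federated learning viewed through stochastic approximation: each round, clients return noisy gradients, the server averages them and applies a Robbins-Monro decaying step size. The quadratic client objectives are an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
num_clients, dim = 5, 4
# Each client holds data implying a local optimum theta_k.
local_optima = rng.standard_normal((num_clients, dim))
target = local_optima.mean(axis=0)   # global optimum of the averaged loss

theta = np.zeros(dim)
for k in range(500):
    # Clients compute stochastic gradients of 0.5 * ||theta - theta_k||^2.
    grads = [(theta - opt) + 0.1 * rng.standard_normal(dim) for opt in local_optima]
    a_k = 1.0 / (k + 1)              # decaying stochastic approximation step size
    theta -= a_k * np.mean(grads, axis=0)

print("distance to global optimum:", np.linalg.norm(theta - target))
```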
arXiv Detail & Related papers (2024-02-20T12:00:25Z)
- A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
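A minimal sketch of the random feature setup: a fixed random first layer with a trainable linear readout, trained by gradient descent while tracking how the gap between train and test loss builds up over steps. Sizes, step size, and the linear teacher are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d, width = 100, 500, 10, 200
teacher = rng.standard_normal(d)
X_tr = rng.standard_normal((n_train, d)); y_tr = X_tr @ teacher
X_te = rng.standard_normal((n_test, d));  y_te = X_te @ teacher

W = rng.standard_normal((d, width)) / np.sqrt(d)         # fixed random projection
phi = lambda X: np.maximum(X @ W, 0.0) / np.sqrt(width)  # ReLU random features
a = np.zeros(width)                                      # trainable readout weights

lr = 0.2
for step in range(2001):
    residual = phi(X_tr) @ a - y_tr
    a -= lr * phi(X_tr).T @ residual / n_train           # gradient descent on MSE
    if step % 500 == 0:
        train = np.mean(residual ** 2)
        test = np.mean((phi(X_te) @ a - y_te) ** 2)
        print(f"step {step:4d}: train {train:.4f}  test {test:.4f}  gap {test - train:.4f}")
```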
arXiv Detail & Related papers (2024-02-02T01:41:38Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
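A rough sketch of a diffusion-spectral-entropy style measure: build a diffusion operator from pairwise affinities and take the entropy of its normalized eigenvalue spectrum. The kernel choice and normalization are illustrative assumptions, not necessarily the paper's exact definition.

```python
import numpy as np

rng = np.random.default_rng(0)
# Points lying near a 2D manifold embedded in 10 dimensions.
X = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 10))

def diffusion_spectral_entropy(X, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-d2 / (2 * sigma ** 2))                   # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)                 # row-normalized diffusion matrix
    lam = np.abs(np.linalg.eigvals(P))
    p = lam / lam.sum()                                  # normalize spectrum to a distribution
    p = p[p > 1e-12]
    return -(p * np.log(p)).sum()                        # Shannon entropy of the spectrum

print("diffusion spectral entropy:", diffusion_spectral_entropy(X))
```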
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Flag Aggregator: Scalable Distributed Training under Failures and Augmented Losses using Convex Optimization [14.732408788010313]
ML applications increasingly rely on complex deep learning models and large datasets.
To scale computation and data, these models are inevitably trained in a distributed manner in clusters of nodes, and their updates are aggregated before being applied to the model.
With data augmentation added to these settings, there is a critical need for robust and efficient aggregation systems.
We show that our approach significantly enhances the robustness of state-of-the-art Byzantine resilient aggregators.
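The Flag Aggregator itself solves a convex optimization problem; as a simpler stand-in, this sketch contrasts plain averaging with coordinate-wise median aggregation when some workers send corrupted (Byzantine) gradient updates. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_grad = np.ones(10)
honest = [true_grad + 0.1 * rng.standard_normal(10) for _ in range(8)]
byzantine = [100.0 * rng.standard_normal(10) for _ in range(2)]
updates = np.stack(honest + byzantine)

mean_agg = updates.mean(axis=0)            # plain averaging breaks under corruption
median_agg = np.median(updates, axis=0)    # robust to a minority of outliers

print("mean aggregation error:  ", np.linalg.norm(mean_agg - true_grad))
print("median aggregation error:", np.linalg.norm(median_agg - true_grad))
```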
arXiv Detail & Related papers (2023-02-12T06:38:30Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation so can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- TELESTO: A Graph Neural Network Model for Anomaly Classification in Cloud Services [77.454688257702]
Machine learning (ML) and artificial intelligence (AI) are applied to IT system operation and maintenance.
One direction aims at the recognition of re-occurring anomaly types to enable remediation automation.
We propose a method that is invariant to dimensionality changes of given data.
arXiv Detail & Related papers (2021-02-25T14:24:49Z)
- Conditional Mutual information-based Contrastive Loss for Financial Time Series Forecasting [12.0855096102517]
We present a representation learning framework for financial time series forecasting.
We propose to first learn compact representations from time series data, then use the learned representations to train a simpler model for predicting time series movements, as sketched below.
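A minimal sketch of this two-stage idea: learn compact representations of time series windows, then train a simpler model on them to predict movements. PCA stands in for the paper's contrastive representation learning; the synthetic price series is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
prices = np.cumsum(0.01 * rng.standard_normal(3000)) + 100.0
window = 32
X = np.stack([prices[i:i + window] for i in range(len(prices) - window - 1)])
y = (prices[window + 1:] > prices[window:-1]).astype(int)  # next-step up/down movement

split = 2000
rep = PCA(n_components=8).fit(X[:split])         # stage 1: compact representation
Z_tr, Z_te = rep.transform(X[:split]), rep.transform(X[split:])

clf = LogisticRegression().fit(Z_tr, y[:split])  # stage 2: simple predictive model
print("test accuracy:", clf.score(Z_te, y[split:]))
```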
arXiv Detail & Related papers (2020-02-18T15:24:33Z)