On the Prediction of Wi-Fi Performance through Deep Learning
- URL: http://arxiv.org/abs/2512.00211v1
- Date: Fri, 28 Nov 2025 21:22:07 GMT
- Title: On the Prediction of Wi-Fi Performance through Deep Learning
- Authors: Gabriele Formis, Amanda Ericson, Stefan Forsstrom, Kyi Thar, Gianluca Cena, Stefano Scanzio
- Abstract summary: This contribution focuses on the prediction of the Frame Delivery Ratio (FDR), a key metric that represents the percentage of successful transmissions. The analysis focuses on two models of deep learning: a Convolutional Neural Network (CNN) and a Long Short-Term Memory network (LSTM). Preliminary results show that both models are able to predict the evolution of the FDR with good accuracy, even from minimal information.
- Score: 0.11726720776908521
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensuring reliable and predictable communications is one of the main goals in modern industrial systems that rely on Wi-Fi networks, especially in scenarios where continuity of operation and low latency are required. In these contexts, the ability to predict changes in wireless channel quality can enable adaptive strategies and significantly improve system robustness. This contribution focuses on the prediction of the Frame Delivery Ratio (FDR), a key metric that represents the percentage of successful transmissions, starting from time sequences of binary outcomes (success/failure) collected in a real scenario. The analysis focuses on two models of deep learning: a Convolutional Neural Network (CNN) and a Long Short-Term Memory network (LSTM), both selected for their ability to predict the outcome of time sequences. Models are compared in terms of prediction accuracy and computational complexity, with the aim of evaluating their applicability to systems with limited resources. Preliminary results show that both models are able to predict the evolution of the FDR with good accuracy, even from minimal information (a single binary sequence). In particular, CNN shows a significantly lower inference latency, with a marginal loss in accuracy compared to LSTM.
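The abstract describes predicting the FDR from time sequences of binary transmission outcomes. As a hedged illustration of the input representation (not the paper's CNN/LSTM models), the sketch below computes the FDR over a sliding window of success/failure bits and uses a simple persistence predictor as a baseline; the window size and toy trace are assumptions for illustration only.

```python
# Minimal sketch: Frame Delivery Ratio (FDR) over a sliding window of binary
# transmission outcomes (1 = success, 0 = failure). The window size and the
# toy trace below are illustrative assumptions, not values from the paper.

def fdr(outcomes):
    """FDR = fraction of successful transmissions in a window."""
    return sum(outcomes) / len(outcomes)

def sliding_fdr(outcomes, window):
    """FDR computed over each full window of the outcome sequence."""
    return [fdr(outcomes[i:i + window])
            for i in range(len(outcomes) - window + 1)]

def naive_predict(fdr_series):
    """Persistence baseline: predict the next FDR as the last observed one."""
    return fdr_series[-1]

outcomes = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]  # toy success/failure trace
series = sliding_fdr(outcomes, window=5)
print(series[0])  # FDR of the first window: 4/5 = 0.8
print(naive_predict(series))
```

In the paper's setting, a learned model (CNN or LSTM) would replace the persistence baseline and map a window of raw binary outcomes directly to the future FDR.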
Related papers
- Human-Centered Explainable AI for Security Enhancement: A Deep Intrusion Detection Framework [0.0]
This paper presents a novel IDS framework that integrates Explainable Artificial Intelligence (XAI) to enhance transparency in deep learning models. The framework was evaluated experimentally using the benchmark dataset NSL-KDD, demonstrating superior performance compared to traditional IDS and black-box deep learning models.
arXiv Detail & Related papers (2026-02-04T20:33:27Z) - A Self-organizing Interval Type-2 Fuzzy Neural Network for Multi-Step Time Series Prediction [9.546043411729206]
Interval type-2 fuzzy neural networks (IT2FNN) have shown exceptional performance in uncertainty modelling for single-step prediction tasks. This paper proposes a new self-organizing interval type-2 fuzzy neural network with multiple outputs (SOIT2FNN-MO). Experimental results on chaotic and microgrid prediction problems demonstrate that SOIT2FNN-MO outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-07-10T19:35:44Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Dual Accuracy-Quality-Driven Neural Network for Prediction Interval Generation [0.0]
We present a method to learn prediction intervals for regression-based neural networks automatically.
Our main contribution is the design of a novel loss function for the PI-generation network.
Experiments using a synthetic dataset, eight benchmark datasets, and a real-world crop yield prediction dataset showed that our method was able to maintain a nominal probability coverage.
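The entry above reports that the method maintains a nominal probability coverage. As a hedged sketch of how such coverage is typically checked (the paper's loss function itself is not reproduced here), the snippet below computes the fraction of targets falling inside their prediction intervals; all values are made up for illustration.

```python
# Minimal sketch: empirical coverage of prediction intervals (often called
# PICP, Prediction Interval Coverage Probability). Data below is illustrative.

def coverage(lower, upper, targets):
    """Fraction of targets that fall inside their prediction interval."""
    inside = sum(1 for lo, up, y in zip(lower, upper, targets) if lo <= y <= up)
    return inside / len(targets)

lower   = [0.0, 1.0, 2.0, 3.0]
upper   = [1.0, 2.0, 3.0, 4.0]
targets = [0.5, 2.5, 2.1, 3.9]
print(coverage(lower, upper, targets))  # 3 of 4 targets covered -> 0.75
```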
arXiv Detail & Related papers (2022-12-13T05:03:16Z) - A Learning Convolutional Neural Network Approach for Network Robustness Prediction [13.742495880357493]
Network robustness is critical for various societal and industrial networks in the face of malicious attacks.
In this paper, an improved method for network robustness prediction is developed based on learning feature representation using a convolutional neural network (LFR-CNN).
In this scheme, higher-dimensional network data are compressed to lower-dimensional representations, and then passed to a CNN to perform robustness prediction.
arXiv Detail & Related papers (2022-03-20T13:45:55Z) - Feature Analysis for ML-based IIoT Intrusion Detection [0.0]
Powerful Machine Learning models have been adopted to implement Network Intrusion Detection Systems (NIDSs).
It is important to select the right set of data features, which maximise the detection accuracy as well as computational efficiency.
This paper provides an extensive analysis of the optimal feature sets in terms of the importance and predictive power of network attacks.
arXiv Detail & Related papers (2021-08-29T02:19:37Z) - Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that the likelihood-ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z) - Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy [42.15969584135412]
Neural network pruning is a popular technique used to reduce the inference costs of modern networks.
We evaluate whether the use of test accuracy alone in the terminating condition is sufficient to ensure that the resulting model performs well.
We find that pruned networks effectively approximate the unpruned model, however, the prune ratio at which pruned networks achieve commensurate performance varies significantly across tasks.
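The pruning entry above refers to removing weights up to a prune ratio. As a hedged toy stand-in for the technique studied (real pipelines prune tensors layer-wise and usually fine-tune afterwards), the sketch below applies magnitude pruning to a flat weight vector, zeroing the smallest-magnitude weights; the weights and ratio are illustrative assumptions.

```python
# Minimal sketch: magnitude pruning of a flat weight vector at a given prune
# ratio, i.e. zeroing the smallest-magnitude weights. Toy illustration only.

def magnitude_prune(weights, ratio):
    """Zero out the `ratio` fraction of weights with smallest magnitude."""
    k = int(len(weights) * ratio)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.9, -0.05, 0.4, -0.8, 0.01, 0.3]
pruned = magnitude_prune(weights, ratio=0.5)
print(pruned)  # the three smallest magnitudes (0.01, 0.05, 0.3) become zero
```

Note that ties at the threshold can prune slightly more than the requested ratio; production implementations handle this per layer.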
arXiv Detail & Related papers (2021-03-04T13:22:16Z) - Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network, and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z) - Multi-Sample Online Learning for Probabilistic Spiking Neural Networks [43.8805663900608]
Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains for inference and learning.
This paper introduces an online learning rule based on generalized expectation-maximization (GEM).
Experimental results on structured output memorization and classification on a standard neuromorphic data set demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration.
arXiv Detail & Related papers (2020-07-23T10:03:58Z) - Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long term bonds as test assets, we investigate neural networks.
We find that our networks perform significantly worse when fed substantially less data.
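The entry above concerns Monte-Carlo simulation of asset returns for Value at Risk (VaR) threshold estimation. As a hedged sketch of the classical Monte-Carlo VaR baseline (not the paper's neural approach), the snippet below simulates returns from an assumed normal distribution and reads off the loss quantile; the distributional form and parameters are illustrative assumptions.

```python
# Minimal sketch: Monte-Carlo Value at Risk under an assumed normal return
# model. Parameters below are illustrative, not taken from the paper.
import random

def monte_carlo_var(mu, sigma, confidence, n_sims, seed=0):
    """VaR at `confidence`: loss exceeded in (1 - confidence) of simulations."""
    rng = random.Random(seed)
    returns = sorted(rng.gauss(mu, sigma) for _ in range(n_sims))
    # the (1 - confidence) quantile of returns, reported as a positive loss
    idx = int((1.0 - confidence) * n_sims)
    return -returns[idx]

var_95 = monte_carlo_var(mu=0.0005, sigma=0.02, confidence=0.95, n_sims=100_000)
print(round(var_95, 4))  # roughly 1.645 * sigma - mu for a normal model
```

A neural approach, as studied in the paper, would instead learn the quantile directly from historical return data rather than assume a parametric distribution.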
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.