Residual-based Adaptive Huber Loss (RAHL) -- Design of an improved Huber loss for CQI prediction in 5G networks
- URL: http://arxiv.org/abs/2408.14718v1
- Date: Tue, 27 Aug 2024 00:58:32 GMT
- Title: Residual-based Adaptive Huber Loss (RAHL) -- Design of an improved Huber loss for CQI prediction in 5G networks
- Authors: Mina Kaviani, Jurandy Almeida, Fabio L. Verdi
- Abstract summary: We propose a novel loss function, named Residual-based Adaptive Huber Loss (RAHL).
RAHL balances robustness against outliers while preserving inlier data precision.
Results affirm the superiority of RAHL, offering a promising avenue for enhanced CQI prediction in 5G networks.
- Score: 0.7499722271664144
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The Channel Quality Indicator (CQI) plays a pivotal role in 5G networks, optimizing infrastructure dynamically to ensure high Quality of Service (QoS). Recent research has focused on improving CQI estimation in 5G networks using machine learning. In this field, the selection of the proper loss function is critical for training an accurate model. Two commonly used loss functions are Mean Squared Error (MSE) and Mean Absolute Error (MAE). Roughly speaking, MSE puts more weight on outliers, while MAE weights the majority of samples more evenly. Here, we argue that the Huber loss function is more suitable for CQI prediction, since it combines the benefits of both MSE and MAE. To achieve this, the Huber loss transitions smoothly between MSE and MAE, controlled by a user-defined hyperparameter called delta. However, finding the right balance between sensitivity to small errors (MSE-like behavior) and robustness to outliers (MAE-like behavior) by manually choosing the optimal delta is challenging. To address this issue, we propose a novel loss function, named Residual-based Adaptive Huber Loss (RAHL). In RAHL, a learnable residual is added to the delta, enabling the model to adapt based on the distribution of errors in the data. Our approach effectively balances model robustness against outliers while preserving inlier data precision. The widely recognized Long Short-Term Memory (LSTM) model is employed in conjunction with RAHL, showing significantly improved results compared to the aforementioned loss functions. The obtained results affirm the superiority of RAHL, offering a promising avenue for enhanced CQI prediction in 5G networks.
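Although the paper's implementation is not reproduced here, the idea admits a short sketch. Below is a minimal PyTorch illustration of an adaptive Huber loss in the spirit of RAHL; the class name, the softplus positivity constraint, and the exact way the learnable residual enters delta are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveHuberLoss(nn.Module):
    """Huber loss whose threshold delta is a user-chosen base value plus a
    learnable residual, so the MSE/MAE transition point can adapt to the
    error distribution during training (a sketch in the spirit of RAHL)."""

    def __init__(self, base_delta: float = 1.0):
        super().__init__()
        self.base_delta = base_delta
        # Learnable residual added to delta; optimized jointly with the model.
        self.residual = nn.Parameter(torch.zeros(1))

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # softplus keeps the effective delta strictly positive (an assumption).
        delta = F.softplus(self.residual + self.base_delta)
        err = torch.abs(pred - target)
        quadratic = 0.5 * err ** 2            # MSE-like branch for small errors
        linear = delta * (err - 0.5 * delta)  # MAE-like branch for outliers
        return torch.where(err <= delta, quadratic, linear).mean()
```

When training, the loss module's parameter must be passed to the optimizer together with the model's, e.g. `torch.optim.Adam(list(model.parameters()) + list(criterion.parameters()))`.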
Related papers
- Robust Fine-tuning of Zero-shot Models via Variance Reduction [56.360865951192324]
When fine-tuning zero-shot models, our desideratum is for the fine-tuned model to excel in both in-distribution (ID) and out-of-distribution (OOD) accuracy.
We propose a sample-wise ensembling technique that can simultaneously attain the best ID and OOD accuracy without the trade-offs.
arXiv Detail & Related papers (2024-11-11T13:13:39Z)
- Stabilizing Extreme Q-learning by Maclaurin Expansion [51.041889588036895]
Extreme Q-learning (XQL) employs a loss function based on the assumption that Bellman error follows a Gumbel distribution.
It has demonstrated strong performance in both offline and online reinforcement learning settings.
We propose Maclaurin Expanded Extreme Q-learning to enhance stability; a sketch of the underlying loss follows below.
arXiv Detail & Related papers (2024-06-07T12:43:17Z)
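For context, XQL's Gumbel-regression objective and the Maclaurin truncation that stabilizes it can be sketched as follows; the function names, the beta default, and the truncation order are illustrative assumptions, not the paper's code.

```python
import torch

def gumbel_loss(pred: torch.Tensor, target: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """XQL-style Gumbel regression loss, exp(z) - z - 1 with z = (target - pred) / beta.
    The exponential term is numerically unstable for large positive z."""
    z = (target - pred) / beta
    return (torch.exp(z) - z - 1.0).mean()

def maclaurin_gumbel_loss(pred: torch.Tensor, target: torch.Tensor,
                          beta: float = 1.0, order: int = 4) -> torch.Tensor:
    """Stabilized variant in the spirit of MXQL: exp(z) - z - 1 equals the
    Maclaurin series sum_{n>=2} z^n / n!, so truncating the series removes
    the explosive exponential. Order 2 recovers a scaled squared error."""
    z = (target - pred) / beta
    loss = torch.zeros_like(z)
    factorial = 1.0
    for n in range(2, order + 1):
        factorial *= n               # running value of n!
        loss = loss + z ** n / factorial
    return loss.mean()
```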
- A Robust Machine Learning Approach for Path Loss Prediction in 5G Networks with Nested Cross Validation [0.6138671548064356]
We utilize machine learning (ML) methods, which outperform conventional path loss prediction models, for path loss prediction in a 5G network system.
First, we acquire a dataset from a comprehensive measurement campaign conducted in an urban macro-cell scenario in Beijing, China.
We deploy Support Vector Regression (SVR), CatBoost Regression (CBR), eXtreme Gradient Boosting Regression (XGBR), Artificial Neural Network (ANN), and Random Forest (RF) methods to predict the path loss, and compare the prediction results in terms of Mean Absolute Error (MAE) and Mean Square Error (MSE); a nested cross-validation sketch follows below.
arXiv Detail & Related papers (2023-10-02T09:21:58Z)
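Nested cross-validation separates hyperparameter tuning from performance estimation: an inner loop selects hyperparameters, an outer loop scores the selected model on held-out folds. A minimal scikit-learn sketch; the estimator, parameter grid, and synthetic data are illustrative, not the paper's setup.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# Synthetic stand-in for the measured path loss data.
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)

inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)  # hyperparameter search
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)  # unbiased error estimate

search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    scoring="neg_mean_absolute_error",
    cv=inner_cv,
)
scores = cross_val_score(search, X, y, scoring="neg_mean_absolute_error", cv=outer_cv)
print("Nested CV MAE:", -scores.mean())
```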
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error in both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Alternate Loss Functions for Classification and Robust Regression Can Improve the Accuracy of Artificial Neural Networks [6.452225158891343]
This paper shows that training speed and final accuracy of neural networks can significantly depend on the loss function used to train neural networks.
Two new classification loss functions that significantly improve performance on a wide variety of benchmark tasks are proposed.
arXiv Detail & Related papers (2023-03-17T12:52:06Z)
- Cauchy Loss Function: Robustness Under Gaussian and Cauchy Noise [0.0]
In supervised machine learning, the choice of loss function implicitly assumes a particular noise distribution over the data.
The Cauchy loss function (CLF) assumes a Cauchy noise distribution, and is therefore potentially better suited for data with outliers.
CLF yielded results comparable to or better than those of MSE, with a few notable exceptions; a sketch of the loss follows below.
arXiv Detail & Related papers (2023-02-14T18:34:44Z)
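As a point of comparison with Huber-style losses, the Cauchy (Lorentzian) loss grows only logarithmically in the residual, damping outliers even more aggressively than MAE. A brief sketch of one common parameterization; the scale parameter c and its default are illustrative.

```python
import torch

def cauchy_loss(pred: torch.Tensor, target: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Cauchy/Lorentzian loss, (c^2 / 2) * log(1 + (r / c)^2): the negative
    log-likelihood (up to constants) of Cauchy-distributed residuals r.
    Near r = 0 it behaves like r^2 / 2; for large r it grows logarithmically."""
    r = pred - target
    return (0.5 * c ** 2 * torch.log1p((r / c) ** 2)).mean()
```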
- VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives [84.48039784446166]
We show that model FI supervision can meaningfully improve VQA model accuracy as well as performance on several Right-for-the-Right-Reason metrics.
Our best performing method, Visual Feature Importance Supervision (VisFIS), outperforms strong baselines on benchmark VQA datasets.
Predictions are more accurate when explanations are plausible and faithful, and not when they are plausible but not faithful.
arXiv Detail & Related papers (2022-06-22T17:02:01Z)
- Improving evidential deep learning via multi-task learning [1.8275108630751844]
The objective is to improve the prediction accuracy of the evidential regression network (ENet) while maintaining its efficient uncertainty estimation.
A multi-task learning framework, referred to as MT-ENet, is proposed to accomplish this aim.
The MT-ENet enhances the predictive accuracy of the ENet without losing uncertainty estimation capability on synthetic datasets and real-world benchmarks.
arXiv Detail & Related papers (2021-12-17T07:56:20Z)
- Norm-in-Norm Loss with Faster Convergence and Better Performance for Image Quality Assessment [20.288424566444224]
We explore normalization in the design of loss functions for image quality assessment (IQA) models.
The resulting "Norm-in-Norm" loss encourages the IQA model to make linear predictions with respect to subjective quality scores.
Experiments on two relevant datasets show that, compared to MAE or MSE loss, the new loss enables the IQA model to converge about 10 times faster; a rough sketch of the idea follows below.
arXiv Detail & Related papers (2020-08-10T04:01:21Z)
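The core idea, loosely: center and norm-scale both the predicted and the subjective scores within a batch before measuring their difference, so the loss is invariant to affine rescaling of predictions. A rough sketch under those assumptions; the paper's exact normalization and exponents differ in the details.

```python
import torch

def norm_in_norm_loss(pred: torch.Tensor, mos: torch.Tensor, p: float = 2.0) -> torch.Tensor:
    """Loose sketch of a norm-in-norm objective: subtract the mean, scale to
    unit p-norm, then take the p-norm of the difference. Invariance to affine
    rescaling of predictions encourages linear prediction behavior."""
    def normalize(x: torch.Tensor) -> torch.Tensor:
        x = x - x.mean()
        return x / (x.norm(p=p) + 1e-8)  # epsilon guards against a zero norm
    return (normalize(pred) - normalize(mos)).norm(p=p)
```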
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
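Focal loss down-weights well-classified examples by the factor (1 - p_t)^gamma, which, as the paper argues, also curbs overconfidence and hence miscalibration. A minimal multi-class sketch; the gamma default is illustrative.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Multi-class focal loss, -(1 - p_t)^gamma * log(p_t), where p_t is the
    predicted probability of the true class. gamma = 0 recovers cross-entropy."""
    log_pt = F.log_softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-((1.0 - pt) ** gamma) * log_pt).mean()
```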