A Novel Approach for Machine Learning-based Load Balancing in High-speed
Train System using Nested Cross Validation
- URL: http://arxiv.org/abs/2310.01034v1
- Date: Mon, 2 Oct 2023 09:24:10 GMT
- Title: A Novel Approach for Machine Learning-based Load Balancing in High-speed
Train System using Nested Cross Validation
- Authors: Ibrahim Yazici and Emre Gures
- Abstract summary: Fifth-generation (5G) mobile communication networks have recently emerged in various fields, including high-speed trains.
We model the system performance of a high-speed train system with a novel machine learning (ML) approach, a nested cross-validation scheme.
- Score: 0.6138671548064356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fifth-generation (5G) mobile communication networks have recently emerged in
various fields, including high-speed trains. However, the dense deployment of 5G
millimeter wave (mmWave) base stations (BSs) and the high speed of moving
trains lead to frequent handovers (HOs), which can adversely affect the
Quality-of-Service (QoS) of mobile users. As a result, HO optimization and
resource allocation are essential considerations for managing mobility in
high-speed train systems. In this paper, we model the system performance of a
high-speed train system with a novel machine learning (ML) approach, a nested
cross-validation scheme, which prevents information leakage from model
evaluation into model parameter tuning, thereby avoiding overfitting and
yielding a better generalization error. To this end, we employ ML methods for
the high-speed train system scenario. Handover Margin (HOM) and Time-to-Trigger
(TTT) values are used as features, and several KPIs are used as outputs, and
several ML methods including Gradient Boosting Regression (GBR), Adaptive
Boosting (AdaBoost), CatBoost Regression (CBR), Artificial Neural Network
(ANN), Kernel Ridge Regression (KRR), Support Vector Regression (SVR), and
k-Nearest Neighbor Regression (KNNR) are employed for the problem. Finally,
the cross-validation schemes are compared across the methods in terms of the
mean absolute error (MAE) and mean square error (MSE) metrics. According to
the obtained results, the boosting methods (AdaBoost, CBR, and GBR) with the
nested cross-validation scheme clearly outperform the same methods under the
conventional cross-validation scheme. On the other hand, SVR, KNNR, KRR, and
ANN with the nested scheme produce promising results for the prediction of
some KPIs relative to their conventional-scheme counterparts.
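To make the nested cross-validation idea above concrete, here is a minimal sketch using scikit-learn. The (HOM, TTT) features and the GBR model come from the abstract, but the synthetic data, fold counts, and hyperparameter grid are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of nested cross-validation (not the authors' exact setup).
# Inner loop: hyperparameter tuning; outer loop: unbiased generalization error.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))        # stand-in for (HOM, TTT) feature pairs
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=200)  # stand-in KPI

param_grid = {"n_estimators": [100, 300], "max_depth": [2, 3]}   # illustrative
inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)

# GridSearchCV tunes on the inner folds only, so the outer folds that score
# the tuned model never leak into hyperparameter selection.
tuned_gbr = GridSearchCV(GradientBoostingRegressor(random_state=0),
                         param_grid, cv=inner_cv,
                         scoring="neg_mean_absolute_error")
outer_mae = -cross_val_score(tuned_gbr, X, y, cv=outer_cv,
                             scoring="neg_mean_absolute_error")
print(f"nested-CV MAE: {outer_mae.mean():.4f} +/- {outer_mae.std():.4f}")
```

Because the outer folds are held out from tuning entirely, the reported MAE estimates the generalization error of the whole tuning-plus-fitting pipeline, which is precisely the leakage-avoidance property the abstract highlights.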
Related papers
- A Robust Machine Learning Approach for Path Loss Prediction in 5G
Networks with Nested Cross Validation [0.6138671548064356]
We utilize machine learning (ML) methods, which outperform conventional path loss prediction models, for path loss prediction in a 5G network system.
First, we acquire a dataset obtained through a comprehensive measurement campaign conducted in an urban macro-cell scenario located in Beijing, China.
We deploy Support Vector Regression (SVR), CatBoost Regression (CBR), eXtreme Gradient Boosting Regression (XGBR), Artificial Neural Network (ANN), and Random Forest (RF) methods to predict the path loss, and compare the prediction results in terms of Mean Absolute Error (MAE) and Mean Square Error (MSE).
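A hedged sketch of the kind of MAE/MSE comparison this summary describes: SVR and RF are two of the listed methods (CatBoost and XGBoost require third-party packages), and the data below is a synthetic stand-in, not the Beijing measurement campaign.

```python
# Illustrative model comparison on placeholder data (not the paper's dataset).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 4))                        # placeholder features
y = X.sum(axis=1) + rng.normal(scale=0.2, size=500)   # placeholder path loss

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
for name, model in [("SVR", SVR()),
                    ("RF", RandomForestRegressor(random_state=1))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "MAE:", round(mean_absolute_error(y_te, pred), 4),
          "MSE:", round(mean_squared_error(y_te, pred), 4))
```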
arXiv Detail & Related papers (2023-10-02T09:21:58Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth
Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
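For context, the soft-thresholding operator that these unfolded networks build on is compact to state. The sketch below shows the standard (non-smooth) operator and one ISTA step for a LASSO-type objective; the problem data and step size are assumptions, and the paper's smooth soft-thresholding variant is not reproduced here.

```python
# Standard soft-thresholding prox operator and one ISTA iteration for
# min_x 0.5*||Ax - b||^2 + lam*||x||_1 (illustrative data, not the paper's).
import numpy as np

def soft_threshold(v, tau):
    # Elementwise shrinkage toward zero by tau.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 50))
b = rng.normal(size=20)
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the gradient Lipschitz constant
x = np.zeros(50)
x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)  # one ISTA step
```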
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- Model-based Deep Learning Receiver Design for Rate-Splitting Multiple
Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL receiver outperforms the SIC receiver with imperfect CSIR by a significant margin.
arXiv Detail & Related papers (2022-05-02T12:23:55Z)
- Accurate Discharge Coefficient Prediction of Streamlined Weirs by
Coupling Linear Regression and Deep Convolutional Gated Recurrent Unit [2.4475596711637433]
The present study proposes data-driven modeling techniques, as an alternative to CFD simulation, to predict the discharge coefficient based on an experimental dataset.
It is found that the proposed three-layer hierarchical DL algorithm, consisting of a convolutional layer coupled with two subsequent GRU levels and hybridized with the LR method, leads to lower error metrics.
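As a rough sketch of the three-layer hierarchy described above (one convolutional layer feeding two GRU levels), here is a minimal PyTorch module. The layer sizes, input shapes, and the way the linear-regression component would be blended in are assumptions, not the paper's exact design.

```python
# Hypothetical conv + two-level GRU regressor (sizes are assumptions).
import torch
import torch.nn as nn

class ConvGRURegressor(nn.Module):
    def __init__(self, in_features=4, conv_channels=16, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(in_features, conv_channels,
                              kernel_size=3, padding=1)
        # num_layers=2 stacks the two GRU levels from the summary.
        self.gru = nn.GRU(conv_channels, hidden, num_layers=2,
                          batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, seq_len, features)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        out, _ = self.gru(h)
        return self.head(out[:, -1])          # predict discharge coefficient

# A simple hybrid could average this prediction with a fitted linear
# regression on the same features (an assumption about the hybridization).
model = ConvGRURegressor()
pred = model(torch.randn(8, 20, 4))           # (batch=8, seq=20, feat=4)
```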
arXiv Detail & Related papers (2022-04-12T01:59:36Z)
- Integrate Lattice-Free MMI into End-to-End Speech Recognition [87.01137882072322]
In automatic speech recognition (ASR) research, discriminative criteria have achieved superior performance in DNN-HMM systems.
With this motivation, the adoption of discriminative criteria is promising to boost the performance of end-to-end (E2E) ASR systems.
Previous works have introduced the minimum Bayesian risk (MBR, one of the discriminative criteria) into E2E ASR systems.
In this work, novel algorithms are proposed to integrate another widely used discriminative criterion, lattice-free maximum mutual information (LF-MMI), into E2E ASR systems.
arXiv Detail & Related papers (2022-03-29T14:32:46Z)
- Hybridization of Capsule and LSTM Networks for unsupervised anomaly
detection on multivariate data [0.0]
This paper introduces a novel NN architecture which hybridises the Long Short-Term Memory (LSTM) and Capsule Networks into a single network.
The proposed method uses an unsupervised learning technique to overcome the issues with finding large volumes of labelled training data.
arXiv Detail & Related papers (2022-02-11T10:33:53Z)
- Machine Learning Methods for Spectral Efficiency Prediction in Massive
MIMO Systems [0.0]
We study several machine learning approaches to solve the problem of estimating the spectral efficiency (SE) value for a certain precoding scheme, preferably in the shortest possible time.
The best results in terms of mean absolute percentage error (MAPE) are obtained with gradient boosting over sorted features, while linear models demonstrate worse prediction quality.
We investigate the practical applicability of the proposed algorithms in a wide range of scenarios generated by the Quadriga simulator.
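As a small illustration of the MAPE-based evaluation mentioned above, the sketch below scores a plain gradient-boosting regressor on synthetic data; the "sorted features" preprocessing from the summary is not reproduced, all names and shapes are placeholders, and mean_absolute_percentage_error requires scikit-learn 0.24 or later.

```python
# Sketch of a MAPE evaluation for a gradient-boosting SE predictor
# (synthetic stand-in data, not Quadriga simulations).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.uniform(size=(400, 6))          # placeholder channel features
y = 1.0 + X @ rng.uniform(size=6)       # placeholder SE values (kept > 0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
pred = GradientBoostingRegressor(random_state=3).fit(X_tr, y_tr).predict(X_te)
print("MAPE:", mean_absolute_percentage_error(y_te, pred))
```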
arXiv Detail & Related papers (2021-12-29T07:03:10Z)
- CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization [61.71504948770445]
We propose a novel channel pruning method via Class-Aware Trace Ratio Optimization (CATRO) to reduce the computational burden and accelerate the model inference.
We show that CATRO achieves higher accuracy with similar cost or lower cost with similar accuracy than other state-of-the-art channel pruning algorithms.
Because of its class-aware property, CATRO is suitable to prune efficient networks adaptively for various classification subtasks, enhancing handy deployment and usage of deep networks in real-world applications.
arXiv Detail & Related papers (2021-10-21T06:26:31Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- Machine Learning for MU-MIMO Receive Processing in OFDM Systems [14.118477167150143]
We propose an ML-enhanced MU-MIMO receiver that builds on top of a conventional linear minimum mean squared error (LMMSE) architecture.
CNNs are used to compute an approximation of the second-order statistics of the channel estimation error.
A CNN-based demapper jointly processes a large number of frequency-division multiplexing symbols and subcarriers.
arXiv Detail & Related papers (2020-12-15T09:55:37Z)
- Millimeter Wave Communications with an Intelligent Reflector:
Performance Optimization and Distributional Reinforcement Learning [119.97450366894718]
A novel framework is proposed to optimize the downlink multi-user communication of a millimeter wave base station.
A channel estimation approach is developed to measure the channel state information (CSI) in real-time.
A distributional reinforcement learning (DRL) approach is proposed to learn the optimal IR reflection and maximize the expectation of downlink capacity.
arXiv Detail & Related papers (2020-02-24T22:18:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.