Cloud Computing Energy Consumption Prediction Based on Kernel Extreme Learning Machine Algorithm Improved by Vector Weighted Average Algorithm
- URL: http://arxiv.org/abs/2503.04088v1
- Date: Thu, 06 Mar 2025 04:52:50 GMT
- Title: Cloud Computing Energy Consumption Prediction Based on Kernel Extreme Learning Machine Algorithm Improved by Vector Weighted Average Algorithm
- Authors: Yuqing Wang, Xiao Yang
- Abstract summary: This study proposes a novel vector weighted average algorithm (VWAA) with kernel extreme learning machine (KELM) to enhance energy consumption prediction in cloud computing environments. Experimental results demonstrate the superior performance of VWAA-KELM: 94.7% of test set prediction errors fall within [0, 50] units, with only three cases exceeding 100 units. The proposed hybrid framework has broader applications in Internet of Things (IoT) and edge computing, supporting real-time energy management and intelligent resource allocation.
- Score: 11.657154571216234
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid expansion of cloud computing infrastructure, energy consumption has become a critical challenge, driving the need for accurate and efficient prediction models. This study proposes a novel Vector Weighted Average Kernel Extreme Learning Machine (VWAA-KELM) model to enhance energy consumption prediction in cloud computing environments. By integrating a vector weighted average algorithm (VWAA) with kernel extreme learning machine (KELM), the proposed model dynamically adjusts feature weights and optimizes kernel functions, significantly improving prediction accuracy and generalization. Experimental results demonstrate the superior performance of VWAA-KELM: 94.7% of test set prediction errors fall within [0, 50] units, with only three cases exceeding 100 units, indicating strong stability. The model achieves a coefficient of determination (R2) of 0.987 in the training set (RMSE = 28.108, RPD = 8.872) and maintains excellent generalization with R2 = 0.973 in the test set (RMSE = 43.227, RPD = 6.202). Visual analysis confirms that predicted values closely align with actual energy consumption trends, avoiding overfitting while capturing nonlinear dependencies. A key innovation of this study is the introduction of adaptive feature weighting, allowing the model to dynamically assign importance to different input parameters, thereby enhancing high-dimensional data processing. This advancement provides a scalable and efficient approach for optimizing cloud data center energy consumption. Beyond cloud computing, the proposed hybrid framework has broader applications in Internet of Things (IoT) and edge computing, supporting real-time energy management and intelligent resource allocation.
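The abstract does not include code, but the core KELM step it describes has a standard closed-form solution, and the adaptive feature weighting can be sketched as a per-dimension scaling of the inputs before the kernel is applied. The following is an illustrative reconstruction, not the authors' implementation: the RBF kernel, the regularization constant `C`, the class name `WeightedKELM`, and the way the VWAA-derived weights enter the kernel are all assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared Euclidean distances, then Gaussian (RBF) kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class WeightedKELM:
    """Kernel extreme learning machine with per-feature weights (sketch).

    The weight vector w scales each input dimension before the kernel is
    computed, mimicking the adaptive feature weighting the paper attributes
    to VWAA; here the weights are simply supplied by the caller.
    """
    def __init__(self, C=1.0, gamma=1.0, weights=None):
        self.C, self.gamma, self.w = C, gamma, weights

    def fit(self, X, y):
        self.w = np.ones(X.shape[1]) if self.w is None else np.asarray(self.w)
        self.X = X * self.w                      # apply feature weights
        K = rbf_kernel(self.X, self.X, self.gamma)
        n = K.shape[0]
        # Closed-form KELM output weights: beta = (I/C + K)^{-1} y
        self.beta = np.linalg.solve(np.eye(n) / self.C + K, y)
        return self

    def predict(self, X):
        K = rbf_kernel(X * self.w, self.X, self.gamma)
        return K @ self.beta

# Toy usage: fit a nonlinear 1-D energy-like curve and check training R^2
rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(80, 1))
y = 50 * np.sin(X[:, 0]) + 100
model = WeightedKELM(C=100.0, gamma=2.0).fit(X, y)
pred = model.predict(X)
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

In the paper, VWAA would supply the feature weights and kernel parameters through its search procedure; the sketch above only shows the KELM side of that loop.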
Related papers
- Bayesian Optimization of a Lightweight and Accurate Neural Network for Aerodynamic Performance Prediction [0.0]
We propose a new approach to build efficient and accurate predictive models for aerodynamic performance prediction.
To clearly describe the interplay between design variables, hierarchical and categorical kernels are used in the BO formulation.
For the drag coefficient prediction task, the Mean Absolute Percentage Error (MAPE) of our optimized model drops from 0.1433% to 0.0163%.
Our model achieves a MAPE of 0.82% on a benchmark aircraft self-noise prediction problem, significantly outperforming existing models.
arXiv Detail & Related papers (2025-03-25T09:14:36Z)
- Predictable Scale: Part I -- Optimal Hyperparameter Scaling Law in Large Language Model Pretraining [57.919336816077085]
We show that optimal learning rate follows a power-law relationship with both model parameters and data sizes, while optimal batch size scales primarily with data sizes. These laws demonstrate remarkable robustness across variations in model sparsity, training data distribution, and model shape. This is the first work to unify different model shapes and structures, such as Mixture-of-Experts models and dense transformers.
arXiv Detail & Related papers (2025-03-06T18:58:29Z)
- Value-Based Deep RL Scales Predictably [100.21834069400023]
We show that value-based off-policy RL methods are predictable despite community lore regarding their pathological behavior. We validate our approach using three algorithms: SAC, BRO, and PQL on DeepMind Control, OpenAI Gym, and IsaacGym.
arXiv Detail & Related papers (2025-02-06T18:59:47Z)
- Neural Conformal Control for Time Series Forecasting [54.96087475179419]
We introduce a neural network conformal prediction method for time series that enhances adaptivity in non-stationary environments. Our approach acts as a neural controller designed to achieve desired target coverage, leveraging auxiliary multi-view data with neural network encoders. We empirically demonstrate significant improvements in coverage and probabilistic accuracy, and find that our method is the only one that combines good calibration with consistency in prediction intervals.
arXiv Detail & Related papers (2024-12-24T03:56:25Z)
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Regression prediction algorithm for energy consumption regression in cloud computing based on horned lizard algorithm optimised convolutional neural network-bidirectional gated recurrent unit [2.7959678888027906]
We find that power consumption has the highest degree of positive correlation with energy efficiency, while CPU usage has the highest degree of negative correlation with energy efficiency.
We introduce a random forest model and an optimisation model based on the horned lizard optimisation algorithm for testing.
The results show that the optimisation algorithm performs more accurately and reliably in predicting energy efficiency.
arXiv Detail & Related papers (2024-07-19T16:19:14Z)
- Fast Cell Library Characterization for Design Technology Co-Optimization Based on Graph Neural Networks [0.1752969190744922]
Design technology co-optimization (DTCO) plays a critical role in achieving optimal power, performance, and area.
We propose a graph neural network (GNN)-based machine learning model for rapid and accurate cell library characterization.
arXiv Detail & Related papers (2023-12-20T06:10:27Z)
- FedWOA: A Federated Learning Model that uses the Whale Optimization Algorithm for Renewable Energy Prediction [0.0]
This paper introduces FedWOA, a novel federated learning model that aggregates a global prediction model from the weights of local neural network models trained on prosumer energy data.
The evaluation results on prosumer energy data show that FedWOA can effectively enhance the accuracy of energy prediction models by 25% for MSE and 16% for MAE compared to FedAVG.
arXiv Detail & Related papers (2023-09-19T05:44:18Z)
- Comparative Evaluation of Metaheuristic Algorithms for Hyperparameter Selection in Short-Term Weather Forecasting [0.0]
This paper explores the application of metaheuristic algorithms, namely Genetic Algorithm (GA), Differential Evolution (DE), and Particle Swarm Optimization (PSO).
We evaluate their performance in weather forecasting based on metrics such as Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE).
arXiv Detail & Related papers (2023-09-05T22:13:35Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Highly Efficient Salient Object Detection with 100K Parameters [137.74898755102387]
We propose a flexible convolutional module, namely generalized OctConv (gOctConv), to efficiently utilize both in-stage and cross-stages multi-scale features.
We build an extremely lightweight model, namely CSNet, which achieves performance comparable to large models on popular object detection benchmarks with only about 0.2% (100K) of their parameters.
arXiv Detail & Related papers (2020-03-12T07:00:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.