Forecasting the Short-Term Energy Consumption Using Random Forests and
Gradient Boosting
- URL: http://arxiv.org/abs/2207.11952v1
- Date: Mon, 25 Jul 2022 07:40:25 GMT
- Title: Forecasting the Short-Term Energy Consumption Using Random Forests and
Gradient Boosting
- Authors: Cristina Bianca Pop, Viorica Rozina Chifu, Corina Cordea, Emil Stefan
Chifu, Octav Barsan
- Abstract summary: This paper comparatively analyzes the performance of the Random Forests and Gradient Boosting algorithms for forecasting energy consumption from historical data.
The two algorithms are first applied individually to forecast the energy consumption, and then combined using a Weighted Average Ensemble Method.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper comparatively analyzes the performance of the Random Forests
and Gradient Boosting algorithms for forecasting energy consumption from historical
data. The two algorithms are first applied individually to forecast the energy
consumption, and then combined using a Weighted Average Ensemble Method. A
comparison of the experimental results shows that the Weighted Average Ensemble
Method provides more accurate results than either of the two algorithms applied alone.
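The paper's implementation is not included in this listing. As a minimal sketch of the idea, assuming scikit-learn's RandomForestRegressor and GradientBoostingRegressor, a synthetic hourly load series, simple lag features, and an inverse-MAE weighting rule (none of which are taken from the paper), the combination could look like this:

```python
# Hedged sketch: weighted-average ensemble of Random Forests and Gradient Boosting
# for short-term load forecasting. The synthetic data, lag features, and the
# inverse-MAE weighting rule are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic hourly energy-consumption series with a daily cycle plus noise.
hours = np.arange(24 * 90)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Lag features: consumption at t-1, t-2 and t-24 predicts consumption at t.
lags = (1, 2, 24)
X = np.column_stack([load[24 - l:-l] for l in lags])
y = load[24:]

# Chronological split: train / validation (used only to set the weights) / test.
n = len(y)
tr, va = int(0.7 * n), int(0.85 * n)
X_tr, y_tr = X[:tr], y[:tr]
X_va, y_va = X[tr:va], y[tr:va]
X_te, y_te = X[va:], y[va:]

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
gb = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Weight each model by its inverse validation MAE, then normalize the weights.
err_rf = mean_absolute_error(y_va, rf.predict(X_va))
err_gb = mean_absolute_error(y_va, gb.predict(X_va))
w_rf, w_gb = 1 / err_rf, 1 / err_gb
total = w_rf + w_gb
w_rf, w_gb = w_rf / total, w_gb / total

blend = w_rf * rf.predict(X_te) + w_gb * gb.predict(X_te)
print("RF  MAE:", mean_absolute_error(y_te, rf.predict(X_te)))
print("GB  MAE:", mean_absolute_error(y_te, gb.predict(X_te)))
print("Ens MAE:", mean_absolute_error(y_te, blend))
```

Weighting by inverse validation error is only one plausible choice; the paper's exact weighting scheme is not specified in this listing.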
Related papers
- Regression prediction algorithm for energy consumption regression in cloud computing based on horned lizard algorithm optimised convolutional neural network-bidirectional gated recurrent unit [2.7959678888027906]
We find that power consumption has the strongest positive correlation with energy efficiency, while CPU usage has the strongest negative correlation with energy efficiency.
We introduce a random forest model and an optimisation model based on the horned lizard optimisation algorithm for testing.
The results show that the optimisation algorithm performs more accurately and reliably in predicting energy efficiency.
arXiv Detail & Related papers (2024-07-19T16:19:14Z) - Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - A hybrid estimation of distribution algorithm for joint stratification
and sample allocation [0.0]
We propose a hybrid estimation of distribution algorithm (HEDA) to solve the joint stratification and sample allocation problem.
EDAs are black-box optimization algorithms which can be used to estimate, build and sample probability models.
arXiv Detail & Related papers (2022-01-09T21:27:16Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - Adaptive Sampling for Heterogeneous Rank Aggregation from Noisy Pairwise
Comparisons [85.5955376526419]
In rank aggregation problems, users exhibit various accuracy levels when comparing pairs of items.
We propose an elimination-based active sampling strategy, which estimates the ranking of items via noisy pairwise comparisons.
We prove that our algorithm can return the true ranking of items with high probability.
arXiv Detail & Related papers (2021-10-08T13:51:55Z) - A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate gradients of the same population objective.
We show that our method can reduce the number, and frequency, of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z) - WildWood: a new Random Forest algorithm [0.0]
WildWood is a new ensemble algorithm for supervised learning of Random Forest (RF) type.
WW uses bootstrap out-of-bag samples to compute out-of-bag scores (a generic out-of-bag scoring sketch is given after this list).
WW is fast and competitive compared with other well-established ensemble methods.
arXiv Detail & Related papers (2021-09-16T14:36:56Z) - On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms [33.96864594479152]
We analyze the convergence of prior-guided ZO algorithms under a greedy descent framework with various gradient estimators.
We also present a new accelerated random search (ARS) algorithm that incorporates prior information, together with a convergence analysis.
arXiv Detail & Related papers (2021-07-21T14:39:40Z) - Estimating leverage scores via rank revealing methods and randomization [50.591267188664666]
We study algorithms for estimating the statistical leverage scores of rectangular dense or sparse matrices of arbitrary rank.
Our approach is based on combining rank revealing methods with compositions of dense and sparse randomized dimensionality reduction transforms (a reference sketch of exact leverage scores is given after this list).
arXiv Detail & Related papers (2021-05-23T19:21:55Z) - Matrix completion based on Gaussian belief propagation [5.685589351789462]
We develop a message-passing algorithm for noisy matrix completion problems based on matrix factorization.
We derive a memory-friendly version of the proposed algorithm by applying a perturbation treatment commonly used in the literature of approximate message passing.
Experiments on synthetic datasets show that while the proposed algorithm quantitatively exhibits almost the same performance under settings where the earlier algorithm is optimal, it is advantageous when the observed datasets are corrupted by non-Gaussian noise.
arXiv Detail & Related papers (2021-05-01T12:16:49Z) - A General Method for Robust Learning from Batches [56.59844655107251]
We consider a general framework of robust learning from batches, and determine the limits of both classification and distribution estimation over arbitrary, including continuous, domains.
We derive the first robust computationally-efficient learning algorithms for piecewise-interval classification, and for piecewise-polynomial, monotone, log-concave, and gaussian-mixture distribution estimation.
arXiv Detail & Related papers (2020-02-25T18:53:25Z)
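On the WildWood entry above: WildWood's own aggregation of out-of-bag scores is not reproduced here. As a generic illustration only, scikit-learn's RandomForestRegressor can report an out-of-bag score when bootstrap sampling is enabled, which is the basic idea the entry refers to:

```python
# Generic out-of-bag (OOB) scoring sketch with scikit-learn, shown only to
# illustrate the idea mentioned in the WildWood entry; this is NOT WildWood itself.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

# With bootstrap sampling, each tree is trained on roughly 63% of the rows;
# the left-out rows give a validation-like "out-of-bag" estimate for free.
rf = RandomForestRegressor(
    n_estimators=300,
    bootstrap=True,
    oob_score=True,   # compute the OOB R^2 during fitting
    random_state=0,
).fit(X, y)

print("OOB R^2 estimate:", rf.oob_score_)
print("OOB per-sample predictions shape:", rf.oob_prediction_.shape)
```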
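On the leverage-score entry above: the sketch below computes exact statistical leverage scores of a tall, full-column-rank matrix via a thin QR factorization, purely to show what the paper's randomized methods estimate; it is not the paper's algorithm.

```python
# Reference sketch: exact statistical leverage scores of a tall matrix A via a
# thin QR factorization (leverage_i = ||Q[i, :]||^2). Full column rank is assumed
# here; this is NOT the randomized estimator proposed in the entry above.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 20))          # tall dense matrix, full column rank

Q, _ = np.linalg.qr(A, mode="reduced")   # A = QR with orthonormal columns in Q
leverage = np.sum(Q**2, axis=1)          # row-wise squared norms of Q

# Leverage scores lie in [0, 1] and sum to the rank of A (here, 20).
print("max leverage:", leverage.max())
print("sum of leverage scores:", leverage.sum())
```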