National-scale electricity peak load forecasting: Traditional, machine
learning, or hybrid model?
- URL: http://arxiv.org/abs/2107.06174v1
- Date: Wed, 30 Jun 2021 15:17:23 GMT
- Title: National-scale electricity peak load forecasting: Traditional, machine
learning, or hybrid model?
- Authors: Juyong Lee and Youngsang Cho
- Abstract summary: This study performs a comparative analysis to determine the most accurate peak load-forecasting model for Korea.
The results indicate that the hybrid models exhibit significant improvement over the SARIMAX model.
In the case of Korea's highest peak load in 2019, the predictive power of the LSTM model proved to be greater than that of the SARIMAX-LSTM model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the volatility of electricity demand increases owing to climate change and
electrification, the importance of accurate peak load forecasting is
increasing. Peak load forecasting has traditionally been conducted with time
series models; more recently, models based on machine learning and deep
learning have been introduced. This study performs a comparative analysis to
determine the most accurate peak load forecasting model for Korea by comparing
the performance of time series, machine learning, and hybrid models. Seasonal
autoregressive integrated moving average with exogenous variables (SARIMAX) is
used for the time series model. Artificial neural network (ANN), support vector
regression (SVR), and long short-term memory (LSTM) are used for the machine
learning models. SARIMAX-ANN, SARIMAX-SVR, and SARIMAX-LSTM are used for the
hybrid models. The results indicate that the hybrid models exhibit significant
improvement over the SARIMAX model. The LSTM-based models outperformed the
others; the single and hybrid LSTM models did not exhibit a significant
performance difference. In the case of Korea's highest peak load in 2019, the
predictive power of the LSTM model proved to be greater than that of the
SARIMAX-LSTM model. The LSTM, SARIMAX-SVR, and SARIMAX-LSTM models outperformed
the current time series-based forecasting model used in Korea. Thus, Korea's
peak load forecasting performance can be improved by adopting machine learning
or hybrid models.
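The abstract does not spell out how the hybrids are constructed, but a common construction (and a natural reading of "SARIMAX-LSTM") is residual correction: fit SARIMAX to capture the linear and seasonal structure, then train an LSTM on the SARIMAX residuals and sum the two forecasts. The following is a minimal, hypothetical Python sketch of that construction; the model orders, lookback window, network size, and toy data are illustrative assumptions, not the paper's settings.

```python
# Hypothetical residual-correction hybrid: SARIMAX handles the linear/seasonal
# component, an LSTM learns whatever structure is left in the SARIMAX residuals.
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
# Toy stand-in for a daily peak-load series with weekly seasonality.
y = (100 + 10 * np.sin(2 * np.pi * np.arange(400) / 7)
     + rng.normal(0, 2, 400)).astype(np.float32)
train, test = y[:350], y[350:]

# 1) Linear component: seasonal ARIMA on the training series (orders assumed).
sarimax = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7)).fit(disp=False)
resid = (train - sarimax.fittedvalues).astype(np.float32)

# 2) Nonlinear component: LSTM predicting the next residual from a sliding window.
W = 14  # lookback window (assumption)
X = torch.from_numpy(np.stack([resid[i:i + W] for i in range(len(resid) - W)])[..., None])
t = torch.from_numpy(resid[W:][:, None])

class ResidualLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # last hidden state -> next residual

model = ResidualLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):  # full-batch training is enough for a toy series
    opt.zero_grad()
    nn.functional.mse_loss(model(X), t).backward()
    opt.step()

# 3) Hybrid forecast = SARIMAX forecast + LSTM residual correction (one step).
linear_fc = sarimax.forecast(steps=len(test))
with torch.no_grad():
    resid_fc = model(torch.from_numpy(resid[-W:].copy())[None, :, None]).item()
print("hybrid one-step-ahead forecast:", linear_fc[0] + resid_fc)
```

A full multi-step hybrid would roll the residual prediction forward recursively and add it to each SARIMAX forecast step; the one-step version above is kept deliberately small.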
Related papers
- EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z)
- Generalization capabilities and robustness of hybrid machine learning models grounded in flow physics compared to purely deep learning models [2.8686437689115363]
This study investigates the generalization capabilities and robustness of purely deep learning (DL) models and hybrid models based on physical principles in fluid dynamics applications.
Three autoregressive models were compared: a convolutional autoencoder combined with a convolutional LSTM, a variational autoencoder (VAE) combined with a ConvLSTM, and a hybrid model that combines proper orthogonal decomposition (POD) with an LSTM (POD-DL).
While the VAE and ConvLSTM models accurately predicted laminar flow, the hybrid POD-DL model outperformed the others across both laminar and turbulent flow regimes.
arXiv Detail & Related papers (2024-04-27T12:43:02Z)
- Introducing Hybrid Modeling with Time-series-Transformers: A Comparative Study of Series and Parallel Approach in Batch Crystallization [0.0]
Most existing digital twins rely on data-driven black-box models, predominantly deep neural networks, recurrent neural networks, and convolutional neural networks (DNNs, RNNs, and CNNs), to capture the dynamics of chemical systems.
Recently, attention-based time-series transformers (TSTs) that leverage a multi-headed attention mechanism and positional encoding have shown high predictive performance.
A first-of-its-kind TST-based hybrid framework has been developed for batch crystallization, demonstrating improved accuracy and interpretability compared to traditional black-box models.
arXiv Detail & Related papers (2023-07-25T15:19:51Z)
- Short-term Prediction of Household Electricity Consumption Using Customized LSTM and GRU Models [5.8010446129208155]
This paper proposes customized GRU (Gated Recurrent Unit) and Long Short-Term Memory (LSTM) architectures to address the challenging problem of short-term household electricity consumption prediction.
The electricity consumption datasets were obtained from individual household smart meters.
arXiv Detail & Related papers (2022-12-16T23:42:57Z)
- Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One [83.5162421521224]
We propose a unique method termed E-ARM for training autoregressive generative models.
E-ARM takes advantage of a well-designed energy-based learning objective.
We show that E-ARM can be trained efficiently and is capable of alleviating the exposure bias problem.
arXiv Detail & Related papers (2022-06-26T10:58:41Z)
- How robust are pre-trained models to distribution shift? [82.08946007821184]
We show how spurious correlations affect the performance of popular self-supervised learning (SSL) and autoencoder (AE) based models.
We develop a novel evaluation scheme with the linear head trained on out-of-distribution (OOD) data, to isolate the performance of the pre-trained models from a potential bias of the linear head used for evaluation.
arXiv Detail & Related papers (2022-06-17T16:18:28Z)
- A Hybrid Model for Forecasting Short-Term Electricity Demand [59.372588316558826]
Currently, the UK electricity market is guided by load (demand) forecasts published every thirty minutes by the regulator.
We present HYENA: a hybrid predictive model that combines feature engineering (selection of the candidate predictor features), mobile-window predictors, and LSTM encoder-decoders; a minimal encoder-decoder sketch appears after this list.
arXiv Detail & Related papers (2022-05-20T22:13:25Z)
- METRO: Efficient Denoising Pretraining of Large Scale Autoencoding Language Models with Model Generated Signals [151.3601429216877]
We present an efficient method of pretraining large-scale autoencoding language models using training signals generated by an auxiliary model.
We propose a recipe, namely "Model generated dEnoising TRaining Objective" (METRO).
The resultant models, METRO-LM, consisting of up to 5.4 billion parameters, achieve new state-of-the-art on the GLUE, SuperGLUE, and SQuAD benchmarks.
arXiv Detail & Related papers (2022-04-13T21:39:15Z)
- Learning and Dynamical Models for Sub-seasonal Climate Forecasting: Comparison and Collaboration [20.52175766498954]
Sub-seasonal climate forecasting (SSF) is the prediction of key climate variables such as temperature and precipitation on the 2-week to 2-month time horizon.
Recent studies have shown the potential of machine learning (ML) models to advance SSF.
arXiv Detail & Related papers (2021-09-29T06:34:34Z)
- A Hybrid Residual Dilated LSTM and Exponential Smoothing Model for Mid-Term Electric Load Forecasting [1.1602089225841632]
The model combines exponential smoothing (ETS), advanced Long Short-Term Memory (LSTM), and ensembling.
A simulation study performed on the monthly electricity demand time series for 35 European countries confirmed the high performance of the proposed model.
arXiv Detail & Related papers (2020-03-29T10:53:50Z)
- Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine learning inspired models and physics-based models.
We are using such models for real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
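As a concrete illustration of the LSTM encoder-decoder component mentioned in the HYENA entry above, here is a minimal, hypothetical PyTorch sketch of a sequence-to-sequence demand forecaster. The feature-engineering and mobile-window stages are omitted, and all shapes and hyperparameters (window length, hidden size, horizon) are illustrative assumptions rather than the paper's configuration.

```python
# Minimal LSTM encoder-decoder (seq2seq) forecaster sketch.
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    """Encode a history window, then decode a multi-step demand forecast."""
    def __init__(self, n_features=1, hidden=64, horizon=48):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(1, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, 1)

    def forward(self, history):
        # history: (batch, window, n_features) past demand (and covariates)
        _, state = self.encoder(history)   # summarize the past into LSTM state
        step = history[:, -1:, :1]         # seed the decoder with the last value
        outputs = []
        for _ in range(self.horizon):      # autoregressive decoding
            out, state = self.decoder(step, state)
            step = self.proj(out)          # next-step demand estimate, fed back
            outputs.append(step)
        return torch.cat(outputs, dim=1).squeeze(-1)   # (batch, horizon)

# Usage: 7 days of half-hourly history -> 1 day (48 half-hours) ahead.
model = Seq2SeqForecaster()
history = torch.randn(8, 7 * 48, 1)   # toy batch of demand windows
print(model(history).shape)           # torch.Size([8, 48])
```

Seeding the decoder with the last observed value and feeding each prediction back in is one common autoregressive decoding choice; teacher forcing during training is another.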