Enhancing Wind Speed and Wind Power Forecasting Using Shape-Wise Feature
Engineering: A Novel Approach for Improved Accuracy and Robustness
- URL: http://arxiv.org/abs/2401.08233v1
- Date: Tue, 16 Jan 2024 09:34:17 GMT
- Authors: Mulomba Mukendi Christian, Yun Seon Kim, Hyebong Choi, Jaeyoung Lee,
SongHee You
- Abstract summary: This study explores a novel feature engineering approach for predicting wind speed and power.
The results reveal substantial enhancements in model resilience against noise resulting from step increases in data.
The approach achieved an impressive 83% accuracy in predicting unseen data up to 24 steps ahead.
- Score: 6.0447555473286885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate prediction of wind speed and power is vital for enhancing the
efficiency of wind energy systems. Numerous solutions have been implemented to
date, demonstrating their potential to improve forecasting. Among these, deep
learning is perceived as a revolutionary approach in the field. However,
despite their effectiveness, the noise present in the collected data remains a
significant challenge. This noise has the potential to diminish the performance
of these algorithms, leading to inaccurate predictions. In response to this,
this study explores a novel feature engineering approach. This approach
involves altering the data input shape in both Convolutional Neural
Network-Long Short-Term Memory (CNN-LSTM) and Autoregressive models for various
forecasting horizons. The results reveal substantial enhancements in model
resilience against noise resulting from step increases in data. The approach
could achieve an impressive 83% accuracy in predicting unseen data up to 24
steps ahead. Furthermore, this method consistently provides high accuracy across
short-, mid-, and long-term forecasts, outperforming the individual models.
These findings pave the way for further research on noise
reduction strategies at different forecasting horizons through shape-wise
feature engineering.
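The abstract gives no code; as an illustration of the kind of shape-wise input restructuring it describes, the sketch below slices a 1-D wind-speed series into sliding windows and reshapes them into the (samples, timesteps, features) layout a CNN-LSTM expects, with a 24-step target horizon matching the paper's longest evaluation. The function name, window length, and toy series are assumptions, not the authors' pipeline.

```python
import numpy as np

def make_windows(series, window, horizon):
    """Slice a 1-D series into (samples, window, 1) inputs and horizon-step targets."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])                  # past `window` observations
        y.append(series[i + window:i + window + horizon])  # next `horizon` steps
    X = np.asarray(X)[..., np.newaxis]  # add a feature/channel axis for the CNN
    return X, np.asarray(y)

speeds = np.arange(48, dtype=float)   # toy hourly wind-speed series
X, y = make_windows(speeds, window=12, horizon=24)
print(X.shape, y.shape)               # (13, 12, 1) (13, 24)
```

Reshaping the same raw series into different window/feature layouts like this is one concrete way the input shape can become a tunable design choice per forecasting horizon.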
Related papers
- Joint Hypergraph Rewiring and Memory-Augmented Forecasting Techniques in Digital Twin Technology [2.368662284133926]
Digital Twin technology creates virtual replicas of physical objects, processes, or systems by replicating their properties, data, and behaviors.
Digital Twin technology has leveraged graph forecasting techniques in large-scale complex sensor networks to enable accurate forecasting and simulation of diverse scenarios.
We introduce a hybrid architecture that enhances the hypergraph representation learning backbone by incorporating fast adaptation to new patterns and memory-based retrieval of past knowledge.
arXiv Detail & Related papers (2024-08-22T14:08:45Z) - Forecast-PEFT: Parameter-Efficient Fine-Tuning for Pre-trained Motion Forecasting Models [68.23649978697027]
Forecast-PEFT is a fine-tuning strategy that freezes the majority of the model's parameters, focusing adjustments on newly introduced prompts and adapters.
Our experiments show that Forecast-PEFT outperforms traditional full fine-tuning methods in motion prediction tasks.
Forecast-FT further improves prediction performance, evidencing up to a 9.6% enhancement over conventional baseline methods.
arXiv Detail & Related papers (2024-07-28T19:18:59Z) - Learning Long-Horizon Predictions for Quadrotor Dynamics [48.08477275522024]
We study the key design choices for efficiently learning long-horizon prediction dynamics for quadrotors.
We show that sequential modeling techniques showcase their advantage in minimizing compounding errors compared to other types of solutions.
We propose a novel decoupled dynamics learning approach, which further simplifies the learning process while also enhancing modularity.
arXiv Detail & Related papers (2024-07-17T19:06:47Z) - Bayesian Deep Learning for Remaining Useful Life Estimation via Stein
Variational Gradient Descent [14.784809634505903]
We show that Bayesian deep learning models trained via Stein variational gradient descent consistently achieve superior convergence speed and predictive performance.
We propose a method to enhance performance based on the uncertainty information provided by the Bayesian models.
arXiv Detail & Related papers (2024-02-02T02:21:06Z) - Koopman Invertible Autoencoder: Leveraging Forward and Backward Dynamics
for Temporal Modeling [13.38194491846739]
We propose a novel machine learning model based on Koopman operator theory, which we call Koopman Invertible Autoencoders (KIA).
KIA captures the inherent characteristic of the system by modeling both forward and backward dynamics in the infinite-dimensional Hilbert space.
This enables us to efficiently learn low-dimensional representations, resulting in more accurate predictions of long-term system behavior.
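As a toy illustration of the forward-and-backward idea the KIA summary describes (not the paper's actual model), an invertible linear operator on a latent state can be rolled forward and then inverted to recover the initial state exactly:

```python
import numpy as np

# A rotation is an invertible linear operator: a stand-in "Koopman" matrix K
# that advances a 2-D latent state one step forward; K^{-1} steps it backward.
theta = 0.1
K = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

z0 = np.array([1.0, 0.0])                            # initial latent state
z_fwd = np.linalg.matrix_power(K, 5) @ z0            # 5 steps forward
z_back = np.linalg.matrix_power(np.linalg.inv(K), 5) @ z_fwd  # 5 steps back
print(np.allclose(z_back, z0))                       # True: dynamics invert exactly
```

Modeling both directions with one invertible operator is what lets such a model stay consistent over long forward rollouts.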
arXiv Detail & Related papers (2023-09-19T03:42:55Z) - Short-Term Load Forecasting Using A Particle-Swarm Optimized Multi-Head
Attention-Augmented CNN-LSTM Network [0.0]
Short-term load forecasting is of paramount importance in the efficient operation and planning of power systems.
Recent strides in deep learning have shown promise in addressing this challenge.
I propose a novel solution that surmounts these obstacles.
arXiv Detail & Related papers (2023-09-07T13:06:52Z) - Adaptive Fake Audio Detection with Low-Rank Model Squeezing [50.7916414913962]
Traditional approaches, such as finetuning, are computationally intensive and pose a risk of impairing the acquired knowledge of known fake audio types.
We introduce the concept of training low-rank adaptation matrices tailored specifically to the newly emerging fake audio types.
Our approach offers several advantages, including reduced storage memory requirements and lower equal error rates.
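A minimal sketch of the low-rank adaptation idea described above, assuming a single frozen linear layer: only the small factors A and B would be trained per new fake-audio type, and zero-initializing B makes the adapter a no-op before training. The shapes, rank, and initialization are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4                        # layer width and adapter rank (r << d)
W = rng.standard_normal((d, d))     # frozen pretrained weight (never updated)
A = rng.standard_normal((d, r)) * 0.01
B = np.zeros((r, d))                # zero init: adapter starts as a no-op

def adapted_forward(x):
    # Only A and B are trainable; the effective weight is W + A @ B.
    return x @ (W + A @ B)

x = rng.standard_normal((1, d))
print(np.allclose(adapted_forward(x), x @ W))  # True before any training
ratio = (d * r * 2) / (d * d)
print(ratio)  # per-type adapter stores 12.5% of the full matrix here
```

Storing only (A, B) per emerging audio type is what yields the reduced memory footprint, and freezing W avoids overwriting knowledge of known fake types.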
arXiv Detail & Related papers (2023-06-08T06:06:42Z) - Beyond S-curves: Recurrent Neural Networks for Technology Forecasting [60.82125150951035]
We develop an autoencoder approach that employs recent advances in machine learning and time series forecasting.
S-curve forecasts largely exhibit a mean absolute percentage error (MAPE) comparable to a simple ARIMA baseline.
Our autoencoder approach improves the MAPE by 13.5% on average over the second-best result.
arXiv Detail & Related papers (2022-11-28T14:16:22Z) - DeepVol: Volatility Forecasting from High-Frequency Data with Dilated Causal Convolutions [53.37679435230207]
We propose DeepVol, a model based on Dilated Causal Convolutions that uses high-frequency data to forecast day-ahead volatility.
Our empirical results suggest that the proposed deep learning-based approach effectively learns global features from high-frequency data.
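To make "dilated causal convolution" concrete, here is a minimal NumPy sketch (an assumption for illustration, not DeepVol's implementation): left-padding by dilation * (kernel_size - 1) guarantees that output t depends only on inputs at t, t-d, t-2d, ..., so no future information leaks into the forecast.

```python
import numpy as np

def dilated_causal_conv(x, kernel, dilation):
    """1-D causal convolution: output t sees only x[t], x[t-d], x[t-2d], ..."""
    pad = dilation * (len(kernel) - 1)
    xp = np.concatenate([np.zeros(pad), x])  # left-pad so nothing future leaks
    return np.array([
        sum(kernel[k] * xp[t + pad - k * dilation] for k in range(len(kernel)))
        for t in range(len(x))
    ])

x = np.arange(8, dtype=float)
y = dilated_causal_conv(x, kernel=[1.0, 1.0], dilation=2)
print(y)  # [ 0.  1.  2.  4.  6.  8. 10. 12.] — each output sums x[t] and x[t-2]
```

Stacking such layers with geometrically growing dilations (1, 2, 4, ...) is what lets a shallow network cover the long high-frequency context a day-ahead volatility forecast needs.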
arXiv Detail & Related papers (2022-09-23T16:13:47Z) - N-HiTS: Neural Hierarchical Interpolation for Time Series Forecasting [17.53378788483556]
Two common challenges afflicting long-horizon forecasting are the volatility of the predictions and their computational complexity.
We introduce N-HiTS, a model which addresses both challenges by incorporating novel hierarchical and multi-rate data sampling techniques.
We conduct an empirical evaluation demonstrating the advantages of N-HiTS over the state-of-the-art long-horizon forecasting methods.
arXiv Detail & Related papers (2022-01-30T17:52:19Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.