Improving the Accuracy and Interpretability of Neural Networks for Wind
Power Forecasting
- URL: http://arxiv.org/abs/2312.15741v1
- Date: Mon, 25 Dec 2023 14:29:09 GMT
- Title: Improving the Accuracy and Interpretability of Neural Networks for Wind
Power Forecasting
- Authors: Wenlong Liao, Fernando Porte-Agel, Jiannong Fang, Birgitte Bak-Jensen,
Zhe Yang, Gonghao Zhang
- Abstract summary: This paper first proposes simple but effective triple optimization strategies (TriOpts) to accelerate the training process.
Then, permutation feature importance (PFI) and local interpretable model-agnostic explanation (LIME) techniques are presented to interpret the forecasting behavior of DNNs.
- Score: 42.640766130080415
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep neural networks (DNNs) are receiving increasing attention in wind power
forecasting due to their ability to effectively capture complex patterns in
wind data. However, their forecasting accuracy is severely limited by the local
optimal weight problem in optimization algorithms, and their forecasting
behavior also lacks interpretability. To address these two challenges, this
paper first proposes simple but effective triple optimization strategies
(TriOpts) to accelerate the training process and improve the model performance
of DNNs in wind power forecasting. Then, permutation feature importance (PFI)
and local interpretable model-agnostic explanation (LIME) techniques are
presented to interpret the forecasting behavior of DNNs from global and
instance-level perspectives. Simulation results show that the proposed TriOpts
not only drastically improve the model generalization of DNNs for both
deterministic and probabilistic wind power forecasting, but also accelerate the
training process. In addition, the proposed PFI and LIME techniques can
accurately estimate the contribution of each feature to wind power forecasting,
which aids feature engineering and helps explain how the forecasted value for a
given sample is obtained.
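PFI measures global feature relevance by shuffling one input column at a time and recording how much the forecast error grows; LIME instead fits a simple local surrogate model around a single sample to explain that one forecast. The snippet below is a minimal, generic sketch of the PFI idea (not the paper's implementation); `predict_fn`, `X_val`, and `y_val` are hypothetical placeholders for a trained wind power DNN and its validation data.

```python
import numpy as np

def permutation_feature_importance(predict_fn, X, y, n_repeats=10, seed=0):
    """Importance of feature j = average increase in RMSE after shuffling column j."""
    rng = np.random.default_rng(seed)
    baseline = np.sqrt(np.mean((predict_fn(X) - y) ** 2))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])      # break the link between feature j and the target
            rmse = np.sqrt(np.mean((predict_fn(X_perm) - y) ** 2))
            increases.append(rmse - baseline)
        importances[j] = np.mean(increases)
    return importances

# Hypothetical usage: model is a trained DNN, X_val holds features such as
# wind speed and direction, y_val the measured power.
# pfi = permutation_feature_importance(model.predict, X_val, y_val)
```

An instance-level LIME explanation can be obtained analogously with, for example, the `lime` package's LimeTabularExplainer in regression mode, which perturbs the sample of interest and fits a weighted linear surrogate to the DNN's responses.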
Related papers
- Forecast-PEFT: Parameter-Efficient Fine-Tuning for Pre-trained Motion Forecasting Models [68.23649978697027]
Forecast-PEFT is a fine-tuning strategy that freezes the majority of the model's parameters, focusing adjustments on newly introduced prompts and adapters.
Our experiments show that Forecast-PEFT outperforms traditional full fine-tuning methods in motion prediction tasks.
Forecast-FT further improves prediction performance, achieving up to a 9.6% improvement over conventional baseline methods.
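For intuition, here is a minimal PyTorch-style sketch of this kind of parameter-efficient fine-tuning: the pretrained backbone is frozen and only a small, newly added adapter (plus a task head) is optimized. The modules below are generic placeholders, not the Forecast-PEFT architecture.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small residual bottleneck inserted on top of a frozen backbone."""
    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.down, self.up = nn.Linear(dim, bottleneck), nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))  # stands in for a pretrained model
for p in backbone.parameters():
    p.requires_grad = False                       # freeze the majority of the parameters

adapter, head = Adapter(dim=64), nn.Linear(64, 2)
optimizer = torch.optim.Adam(
    list(adapter.parameters()) + list(head.parameters()), lr=1e-3
)  # only the adapter and the head receive gradient updates
```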
arXiv Detail & Related papers (2024-07-28T19:18:59Z)
- Enabling Uncertainty Estimation in Iterative Neural Networks [49.56171792062104]
We develop an approach to uncertainty estimation that provides state-of-the-art estimates at a much lower computational cost than techniques like Ensembles.
We demonstrate its practical value by embedding it in two application domains: road detection in aerial images and the estimation of aerodynamic properties of 2D and 3D shapes.
arXiv Detail & Related papers (2024-03-25T13:06:31Z)
- Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories represented by their pointwise parameters.
We show that, from partway into training onward, updating only the scalar batch normalization parameters matches the performance of training the entire network.
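A rough PyTorch sketch of this setup, assuming a generic convolutional network rather than the paper's models, hands the optimizer only the batch normalization scale and shift scalars and freezes everything else:

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

bn_params = []
for m in net.modules():
    is_bn = isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d))
    for p in m.parameters(recurse=False):
        p.requires_grad = is_bn            # train only the BN scale/shift parameters
        if is_bn:
            bn_params.append(p)

optimizer = torch.optim.SGD(bn_params, lr=0.1, momentum=0.9)
```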
arXiv Detail & Related papers (2024-03-12T07:32:47Z)
- Enhancing Wind Speed and Wind Power Forecasting Using Shape-Wise Feature Engineering: A Novel Approach for Improved Accuracy and Robustness [6.0447555473286885]
This study explores a novel feature engineering approach for predicting wind speed and power.
The results reveal substantial improvements in model robustness to noise arising from step increases in the data.
The approach achieves 83% accuracy in predicting unseen data up to 24 steps ahead.
arXiv Detail & Related papers (2024-01-16T09:34:17Z)
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized visual prompts are used to upgrade neural network sparsification in the proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- Renewable energy management in smart home environment via forecast embedded scheduling based on Recurrent Trend Predictive Neural Network [0.0]
This paper proposes an advanced ML algorithm, called Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (rTPNN-FES).
rTPNN-FES is a novel neural network architecture that simultaneously forecasts renewable energy generation and schedules household appliances.
By its embedded structure, rTPNN-FES eliminates the utilization of separate algorithms for forecasting and scheduling and generates a schedule that is robust against forecasting errors.
arXiv Detail & Related papers (2023-07-04T10:18:16Z)
- Towards Understanding the Unreasonable Effectiveness of Learning AC-OPF Solutions [31.388212637482365]
Optimal Power Flow (OPF) is a fundamental problem in power systems.
Recent research has proposed the use of Deep Neural Networks (DNNs) to find OPF approximations at vastly reduced runtimes.
However, it is not well understood why DNNs can approximate these solutions so effectively; this paper provides a step forward to address this knowledge gap.
arXiv Detail & Related papers (2021-11-22T13:04:31Z)
- An Interpretable Probabilistic Model for Short-Term Solar Power Forecasting Using Natural Gradient Boosting [0.0]
We propose a two-stage probabilistic forecasting framework able to generate highly accurate, reliable, and sharp forecasts.
The framework offers full transparency on both the point forecasts and the prediction intervals (PIs).
To highlight the performance and the applicability of the proposed framework, real data from two PV parks located in Southern Germany are employed.
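To illustrate the general idea of probabilistic forecasting with natural gradient boosting (not the paper's two-stage framework), the sketch below fits an NGBoost regressor that outputs a full predictive distribution per sample, from which point forecasts and prediction intervals follow. Data and hyperparameters are illustrative placeholders; the `ngboost` calls are assumed to follow its public API.

```python
import numpy as np
from scipy.stats import norm
from ngboost import NGBRegressor
from ngboost.distns import Normal

# Placeholder data; in practice X would hold PV-park features (irradiance, temperature, ...).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 6)), rng.normal(size=500)
X_test = rng.normal(size=(50, 6))

ngb = NGBRegressor(Dist=Normal, n_estimators=300, learning_rate=0.03)
ngb.fit(X_train, y_train)

point = ngb.predict(X_test)                      # point forecasts (predictive mean)
dist = ngb.pred_dist(X_test)                     # Gaussian predictive distribution per sample
loc, scale = dist.params["loc"], dist.params["scale"]
lower, upper = norm.interval(0.90, loc=loc, scale=scale)   # 90% prediction intervals
```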
arXiv Detail & Related papers (2021-08-05T12:59:38Z)
- Adaptive Inference through Early-Exit Networks: Design, Challenges and Directions [80.78077900288868]
We decompose the design methodology of early-exit networks to its key components and survey the recent advances in each one of them.
We position early-exiting against other efficient inference solutions and provide our insights on the current challenges and most promising future directions for research in the field.
arXiv Detail & Related papers (2021-06-09T12:33:02Z)
- Error-feedback stochastic modeling strategy for time series forecasting with convolutional neural networks [11.162185201961174]
We propose a novel error-feedback stochastic modeling (ESM) strategy to construct a random convolutional neural network (ESM-CNN) for time series forecasting tasks.
The proposed ESM-CNN not only outperforms state-of-the-art random neural networks, but also exhibits stronger predictive power and lower computing overhead than trained state-of-the-art deep neural network models.
arXiv Detail & Related papers (2020-02-03T13:30:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.