A VAE-Based Bayesian Bidirectional LSTM for Renewable Energy Forecasting
- URL: http://arxiv.org/abs/2103.12969v1
- Date: Wed, 24 Mar 2021 03:47:20 GMT
- Title: A VAE-Based Bayesian Bidirectional LSTM for Renewable Energy Forecasting
- Authors: Devinder Kaur, Shama Naz Islam, and Md. Apel Mahmud
- Abstract summary: The intermittent nature of renewable energy poses new challenges to network operational planning with underlying uncertainties.
This paper proposes a novel Bayesian probabilistic technique for forecasting renewable power generation by addressing data and model uncertainties.
It is inferred from the numerical results that VAE-Bayesian BiLSTM outperforms other probabilistic deep learning methods in terms of forecasting accuracy and computational efficiency for different sizes of the dataset.
- Score: 0.4588028371034407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advancement in distributed generation technologies in modern power
systems has led to a widespread integration of renewable power generation at
customer side. However, the intermittent nature of renewable energy poses new
challenges to network operational planning with underlying uncertainties.
This paper proposes a novel Bayesian probabilistic technique for forecasting
renewable power generation that addresses data and model uncertainties by
integrating bidirectional long short-term memory (BiLSTM) neural networks and
compressing the weight parameters with a variational autoencoder (VAE). Existing
Bayesian deep learning methods suffer from high computational complexity, as
they require drawing a large number of samples from weight parameters expressed
as probability distributions. The proposed method handles the uncertainty
present in the model and data more efficiently by reducing the dimensionality
of the model parameters. The proposed method
is evaluated using pinball loss, reconstruction error, and other forecasting
evaluation metrics. It is inferred from the numerical results that VAE-Bayesian
BiLSTM outperforms other probabilistic deep learning methods in terms of
forecasting accuracy and computational efficiency for different sizes of the
dataset.
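For reference, the pinball loss cited above as an evaluation metric is the standard score for quantile (probabilistic) forecasts: it penalizes under-prediction with weight tau and over-prediction with weight 1 - tau. A minimal sketch, not taken from the paper, is:

```python
def pinball_loss(y_true, y_pred, tau):
    """Average pinball (quantile) loss for quantile level tau in (0, 1).

    y_true: observed values; y_pred: predicted tau-quantiles.
    Under-prediction (y > prediction) is weighted by tau,
    over-prediction by (1 - tau).
    """
    losses = []
    for y, q in zip(y_true, y_pred):
        diff = y - q
        losses.append(tau * diff if diff >= 0 else (tau - 1) * diff)
    return sum(losses) / len(losses)
```

For example, with tau = 0.9 a forecast 2 units below the observation incurs loss 0.9 * 2 = 1.8, while one 2 units above incurs only 0.1 * 2 = 0.2, reflecting that a 0.9-quantile forecast should rarely fall below the realized value.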
Related papers
- Preconditioned Inexact Stochastic ADMM for Deep Model [35.37705488695026]
This paper develops an algorithm, PISA, which enables scalable parallel computing and supports various second-moment schemes.
Grounded in rigorous theoretical guarantees, the algorithm converges under the sole assumption of Lipschitz continuity of the gradient.
Comprehensive experimental evaluations for fine-tuning diverse FMs, including vision models, large language models, reinforcement learning models, generative adversarial networks, and recurrent neural networks, demonstrate its superior numerical performance compared to various state-of-the-art methods.
arXiv Detail & Related papers (2025-02-15T12:28:51Z) - In-Context Freeze-Thaw Bayesian Optimization for Hyperparameter Optimization [35.74766507227412]
We propose FT-PFN, a novel surrogate for Freeze-thaw style BO.
FT-PFN is a prior-data fitted network (PFN) that leverages transformers' in-context learning ability.
We show that, when combined with our novel acquisition mechanism (I-random), the resulting in-context freeze-thaw BO method (ifBO) yields new state-of-the-art performance.
arXiv Detail & Related papers (2024-04-25T17:40:52Z) - Nonparametric End-to-End Probabilistic Forecasting of Distributed Generation Outputs Considering Missing Data Imputation [12.601429509633636]
We introduce a nonparametric end-to-end method for probabilistic forecasting of distributed renewable generation outputs.
We design an end-to-end training process that includes missing data imputation.
arXiv Detail & Related papers (2024-03-31T16:17:59Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - A Free Lunch with Influence Functions? Improving Neural Network
Estimates with Concepts from Semiparametric Statistics [41.99023989695363]
We explore the potential for semiparametric theory to be used to improve neural networks and machine learning algorithms.
We propose a new neural network method MultiNet, which seeks the flexibility and diversity of an ensemble using a single architecture.
arXiv Detail & Related papers (2022-02-18T09:35:51Z) - Information Theoretic Structured Generative Modeling [13.117829542251188]
A novel generative model framework called the structured generative model (SGM) is proposed that makes straightforward optimization possible.
The implementation employs a single neural network driven by an orthonormal input to a single white noise source adapted to learn an infinite Gaussian mixture model.
Preliminary results show that SGM significantly improves on MINE estimation in terms of data efficiency and variance, outperforms conventional and variational Gaussian mixture models, and is also effective for training adversarial networks.
arXiv Detail & Related papers (2021-10-12T07:44:18Z) - Neural Networks for Parameter Estimation in Intractable Models [0.0]
We show how to estimate parameters from max-stable processes, where inference is exceptionally challenging.
We use data from model simulations as input and train deep neural networks to learn statistical parameters.
arXiv Detail & Related papers (2021-07-29T21:59:48Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood
Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z) - Probabilistic Load Forecasting Based on Adaptive Online Learning [7.373617024876726]
This paper presents a method for probabilistic load forecasting based on the adaptive online learning of hidden Markov models.
We propose learning and forecasting techniques with theoretical guarantees, and experimentally assess their performance in multiple scenarios.
The results show that the proposed method can significantly improve the performance of existing techniques for a wide range of scenarios.
arXiv Detail & Related papers (2020-11-30T12:02:26Z) - Bayesian Optimization Meets Laplace Approximation for Robotic
Introspection [41.117361086267806]
We introduce a scalable Laplace Approximation (LA) technique to make Deep Neural Networks (DNNs) more introspective.
In particular, we propose a novel Bayesian Optimization (BO) algorithm to mitigate their tendency of under-fitting the true weight posterior.
We show that the proposed framework can be scaled up to large datasets and architectures.
arXiv Detail & Related papers (2020-10-30T09:28:10Z) - Novel and flexible parameter estimation methods for data-consistent
inversion in mechanistic modeling [0.13635858675752988]
We introduce new methods to solve stochastic inverse problems (SIP) based on rejection sampling, Markov chain Monte Carlo, and generative adversarial networks (GANs).
To overcome limitations of SIP, we reformulate SIP based on constrained optimization and present a novel GAN to solve the constrained optimization problem.
arXiv Detail & Related papers (2020-09-17T13:13:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.