Continuous and Distribution-free Probabilistic Wind Power Forecasting: A
Conditional Normalizing Flow Approach
- URL: http://arxiv.org/abs/2206.02433v1
- Date: Mon, 6 Jun 2022 08:48:58 GMT
- Title: Continuous and Distribution-free Probabilistic Wind Power Forecasting: A
Conditional Normalizing Flow Approach
- Authors: Honglin Wen, Pierre Pinson, Jinghuan Ma, Jie Gu, and Zhijian Jin
- Abstract summary: We present a data-driven approach for probabilistic wind power forecasting based on conditional normalizing flow (CNF).
In contrast with existing approaches, it is distribution-free (as are non-parametric and quantile-based approaches) and can directly yield continuous probability densities.
- Score: 1.684864188596015
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a data-driven approach for probabilistic wind power forecasting
based on conditional normalizing flow (CNF). In contrast with existing
approaches, this one is distribution-free (as are non-parametric and
quantile-based approaches) and can directly yield continuous probability
densities, hence
avoiding quantile crossing. It relies on a base distribution and a set of
bijective mappings. Both the shape parameters of the base distribution and the
bijective mappings are approximated with neural networks. Spline-based
conditional normalizing flow is considered owing to its non-affine
characteristics. During training, the model sequentially maps input
examples onto samples of the base distribution, given the conditional contexts,
where parameters are estimated through maximum likelihood. To issue
probabilistic forecasts, one eventually maps samples of the base distribution
into samples of a desired distribution. Case studies based on open datasets
validate the effectiveness of the proposed model and allow us to discuss its
advantages and caveats with respect to the state of the art.
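To make the training and forecasting recipe concrete, the following is a minimal PyTorch sketch of a conditional normalizing flow fitted by maximum likelihood. It is not the paper's implementation: for brevity it swaps the spline-based (non-affine) couplings for a single conditional affine transform, fixes the base distribution to a standard normal rather than learning its shape parameters, and all names, shapes, and data are illustrative.

```python
# A minimal sketch, assuming PyTorch: a conditional normalizing flow trained
# by maximum likelihood. The paper uses spline-based couplings and a learned
# base distribution; this illustration uses an affine map and a fixed N(0, 1).
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    def __init__(self, context_dim, hidden=64):
        super().__init__()
        # Neural network mapping the conditional context x (e.g., weather
        # forecasts, lagged power) to the transform parameters (mu, log_sigma).
        self.net = nn.Sequential(
            nn.Linear(context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )
        self.base = torch.distributions.Normal(0.0, 1.0)

    def log_prob(self, y, x):
        mu, log_sigma = self.net(x).chunk(2, dim=-1)
        z = (y - mu) * torch.exp(-log_sigma)      # forward map y -> z
        log_det = -log_sigma                      # log |dz/dy|
        return self.base.log_prob(z) + log_det    # change-of-variables formula

    def sample(self, x, n):
        mu, log_sigma = self.net(x).chunk(2, dim=-1)
        z = self.base.sample((n,) + tuple(mu.shape))  # base-distribution samples
        return mu + torch.exp(log_sigma) * z          # inverse map z -> y

flow = ConditionalAffineFlow(context_dim=8)
optimizer = torch.optim.Adam(flow.parameters(), lr=1e-3)
x = torch.randn(256, 8)   # dummy conditioning contexts
y = torch.rand(256, 1)    # dummy normalized wind power in [0, 1]
for _ in range(100):      # maximize the conditional log-likelihood
    loss = -flow.log_prob(y, x).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
scenarios = flow.sample(x[:1], n=1000)  # probabilistic forecast for one context
```

Because forecasts are obtained by pushing base samples through a continuous bijection, the predictive density is continuous by construction, which is how this model family avoids quantile crossing.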
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependencies for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models
We study the large-sample properties of a likelihood-based approach for estimating conditional deep generative models.
Our results lead to the convergence rate of a sieve maximum likelihood estimator for estimating the conditional distribution.
arXiv Detail & Related papers (2024-10-02T20:46:21Z)
- Generating Synthetic Ground Truth Distributions for Multi-step Trajectory Prediction using Probabilistic Composite Bézier Curves [4.837320865223374]
This paper proposes a novel approach to synthetic dataset generation based on composite probabilistic Bézier curves.
The paper showcases an exemplary trajectory prediction model evaluation using generated ground truth distribution data.
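As a rough illustration of the underlying construction, here is a sketch of evaluating one Bézier segment with the de Casteljau algorithm. The paper's probabilistic composite curves additionally attach uncertainty to the control points and chain several segments, neither of which is shown; the control points below are made up.

```python
# Hedged sketch: the mean curve of a single Bézier segment via de Casteljau.
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bézier curve with the given control points at t in [0, 1]."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points.
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# A cubic segment of a hypothetical 2-D trajectory.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
trajectory = [de_casteljau(ctrl, t) for t in np.linspace(0.0, 1.0, 5)]
print(np.round(trajectory, 2))
```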
arXiv Detail & Related papers (2024-04-05T20:50:06Z)
- Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
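The local-linearization idea fits in a few lines: the nonlinearity is replaced by its tangent at the input mean, so a Gaussian stays Gaussian. This is a generic sketch of that approximation for ReLU, not the paper's code, and the numbers are made up.

```python
# Propagate N(mean, var) through ReLU by linearizing at the mean:
# f(x) ~ f(mu) + f'(mu) (x - mu), so the output is Gaussian with
# mean f(mu) and variance f'(mu)^2 * var.
import torch

def relu_linearized(mean, var):
    slope = (mean > 0).float()      # derivative of ReLU at the mean (0 or 1)
    out_mean = torch.relu(mean)     # f(mu)
    out_var = slope * var           # slope^2 * var, and slope^2 == slope here
    return out_mean, out_var

m = torch.tensor([-1.0, 0.5, 2.0])
v = torch.tensor([0.3, 0.3, 0.3])
print(relu_linearized(m, v))        # units with mu <= 0 collapse to zero variance
```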
arXiv Detail & Related papers (2024-02-13T09:40:19Z)
- Formulating Discrete Probability Flow Through Optimal Transport [29.213216002178306]
We first prove that the continuous probability flow is the Monge optimal transport map under certain conditions, and present equivalent evidence for the discrete case.
We then define the discrete probability flow in line with the principles of optimal transport.
Experiments on a synthetic toy dataset and the CIFAR-10 dataset validate the proposed discrete probability flow.
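For intuition only, a toy analogue of the discrete case: between two equally weighted point sets, the Monge optimal transport map reduces to an optimal assignment. The sketch below uses SciPy's assignment solver on made-up points and illustrates the notion, not the paper's construction.

```python
# Toy discrete Monge map: optimal assignment between two small point sets
# under squared Euclidean cost. Points are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

src = np.array([[0.0], [1.0], [2.0]])     # source samples
dst = np.array([[0.5], [2.5], [1.0]])     # target samples
cost = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
rows, cols = linear_sum_assignment(cost)  # optimal pairing = discrete Monge map
print(list(zip(rows.tolist(), cols.tolist())))
```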
arXiv Detail & Related papers (2023-11-07T11:03:27Z)
- Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network [59.79008107609297]
We propose to approximate the joint posterior over both the structure of a Bayesian network and the parameters of its local probability models.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z)
- Distributional Gradient Boosting Machines [77.34726150561087]
Our framework is based on XGBoost and LightGBM.
We show that our framework achieves state-of-the-art forecast accuracy.
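The framework itself extends XGBoost/LightGBM to predict full distribution parameters; as a loose, hedged stand-in, the sketch below produces a probabilistic forecast band from scikit-learn gradient boosting with quantile loss. The data and quantile levels are invented for illustration.

```python
# Stand-in sketch: probabilistic forecasts from gradient boosting via
# quantile loss (not the paper's parameter-predicting framework).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2 + 0.05 * X[:, 0])  # heteroskedastic noise

# One model per quantile level: lower band, median, upper band.
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in (0.1, 0.5, 0.9)
}
x_new = np.array([[5.0]])
print({q: m.predict(x_new)[0] for q, m in models.items()})
```

Note that separately fitted quantile models like these can produce crossing quantiles, which is exactly the issue the main paper's density-based approach avoids.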
arXiv Detail & Related papers (2022-04-02T06:32:19Z)
- Learning Structured Gaussians to Approximate Deep Ensembles [10.055143995729415]
This paper proposes using a sparse-structured multivariate Gaussian to provide a closed-form approximator for dense image prediction tasks.
We capture the uncertainty and structured correlations in the predictions explicitly in a formal distribution, rather than implicitly through sampling alone.
We demonstrate the merits of our approach on monocular depth estimation and show that the advantages of our approach are obtained with comparable quantitative performance.
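As a hedged illustration of the closed-form idea: a multivariate Gaussian with a sparse (here tridiagonal) precision matrix models local correlations between neighboring outputs while keeping the density evaluable exactly. The size and sparsity pattern below are assumptions, not the paper's design.

```python
# Scoring a dense prediction under a sparse-precision Gaussian (illustrative).
import torch

D = 16                  # flattened prediction size (made up)
mean = torch.randn(D)   # predicted per-pixel means
# Tridiagonal precision: nonzeros only between neighboring outputs, so only
# local correlations are parameterized; the matrix is diagonally dominant,
# hence positive definite.
prec = 2.0 * torch.eye(D)
idx = torch.arange(D - 1)
prec[idx, idx + 1] = -0.5
prec[idx + 1, idx] = -0.5
dist = torch.distributions.MultivariateNormal(loc=mean, precision_matrix=prec)
target = torch.randn(D)      # e.g., a ground-truth depth map, flattened
print(dist.log_prob(target)) # closed-form density, no sampling needed
```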
arXiv Detail & Related papers (2022-03-29T12:34:43Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
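The diffeomorphism idea is easiest to see in one dimension: if f maps the data variable monotonically onto the base variable, then P(Y <= y) = Phi(f(y)), so the CDF comes directly from the flow without sampling. The affine f below is a made-up stand-in for a trained flow; the paper's estimator for multivariate closed regions is more involved.

```python
# 1-D illustration: exact CDF evaluation through a monotone flow map.
import torch

base = torch.distributions.Normal(0.0, 1.0)
mu, sigma = 1.5, 0.7                 # illustrative "trained" flow parameters

def flow_forward(y):
    return (y - mu) / sigma          # monotone map from data to base space

def cdf(y):
    # P(Y <= y) = P(Z <= f(y)) because f is increasing.
    return base.cdf(flow_forward(torch.as_tensor(y)))

print(cdf(2.0))
```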
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- Resampling Base Distributions of Normalizing Flows [0.0]
We introduce a base distribution for normalizing flows based on learned rejection sampling.
We develop suitable learning algorithms based on both maximum likelihood estimation and optimization of the reverse Kullback-Leibler divergence.
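A minimal sketch of the idea, under assumptions not in the paper: a network outputs an acceptance probability a(z) that reshapes Gaussian proposals into a richer base distribution. Only sampling is shown; the training objectives mentioned above are omitted.

```python
# Base distribution via learned rejection sampling (illustrative skeleton).
import torch
import torch.nn as nn

class ResampledBase(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.dim = dim
        self.accept = nn.Sequential(     # learned acceptance probability a(z)
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )
        self.proposal = torch.distributions.Normal(0.0, 1.0)

    @torch.no_grad()
    def sample(self, n, max_rounds=100):
        kept = []
        for _ in range(max_rounds):
            z = self.proposal.sample((n, self.dim))   # Gaussian proposals
            mask = torch.rand(n, 1) < self.accept(z)  # stochastic acceptance
            kept.append(z[mask.squeeze(-1)])
            if sum(len(k) for k in kept) >= n:
                break
        return torch.cat(kept)[:n]

base = ResampledBase(dim=2)
print(base.sample(5).shape)              # torch.Size([5, 2])
```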
arXiv Detail & Related papers (2021-10-29T14:44:44Z)
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.