Autoregressive Quantile Flows for Predictive Uncertainty Estimation
- URL: http://arxiv.org/abs/2112.04643v1
- Date: Thu, 9 Dec 2021 01:11:26 GMT
- Title: Autoregressive Quantile Flows for Predictive Uncertainty Estimation
- Authors: Phillip Si, Allan Bishop, Volodymyr Kuleshov
- Abstract summary: We propose Autoregressive Quantile Flows, a flexible class of probabilistic models over high-dimensional variables.
These models are instances of autoregressive flows trained using a novel objective based on proper scoring rules.
- Score: 7.184701179854522
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Numerous applications of machine learning involve predicting flexible
probability distributions over model outputs. We propose Autoregressive
Quantile Flows, a flexible class of probabilistic models over high-dimensional
variables that can be used to accurately capture predictive aleatoric
uncertainties. These models are instances of autoregressive flows trained using
a novel objective based on proper scoring rules, which simplifies the
calculation of computationally expensive determinants of Jacobians during
training and supports new types of neural architectures. We demonstrate that
these models can be used to parameterize predictive conditional distributions
and improve the quality of probabilistic predictions on time series forecasting
and object detection.
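To make the abstract's central idea concrete, here is a minimal sketch of training a conditional model with the quantile (pinball) loss, a proper scoring rule: because the objective only requires quantile evaluations, no Jacobian determinant appears in training. The network, synthetic data, and training loop below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a proper-scoring-rule (pinball/quantile loss) objective.
# Placeholder architecture and toy data; not the authors' code.
import torch
import torch.nn as nn

class ConditionalQuantileNet(nn.Module):
    """Maps (x, tau) -> estimated tau-quantile of y given x."""
    def __init__(self, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, tau):
        return self.net(torch.cat([x, tau], dim=-1)).squeeze(-1)

def pinball_loss(y, y_hat, tau):
    """Quantile (pinball) loss: a proper scoring rule for the tau-quantile."""
    diff = y - y_hat
    tau = tau.squeeze(-1)
    return torch.mean(torch.maximum(tau * diff, (tau - 1.0) * diff))

# Toy heteroscedastic regression data (illustrative only).
torch.manual_seed(0)
x = torch.randn(1024, 3)
y = x.sum(dim=-1) + (0.5 + x[:, 0].abs()) * torch.randn(1024)

model = ConditionalQuantileNet(x_dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    tau = torch.rand(x.shape[0], 1)           # random quantile levels in (0, 1)
    loss = pinball_loss(y, model(x, tau), tau)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, sweeping the quantile level tau for a fixed input traces out the full predictive distribution, which is how such a model captures aleatoric uncertainty without evaluating a likelihood.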
Related papers
- Conformalised Conditional Normalising Flows for Joint Prediction Regions in time series [7.200880964149064]
Conformal Prediction offers a powerful framework for quantifying uncertainty in machine learning models.
Applying conformal prediction to probabilistic generative models, such as Normalising Flows, is not straightforward.
This work proposes a novel method to conformalise conditional normalising flows, specifically addressing the problem of obtaining prediction regions (a generic split-conformal sketch follows this list).
arXiv Detail & Related papers (2024-11-26T02:19:13Z)
- Towards Generalizable and Interpretable Motion Prediction: A Deep Variational Bayes Approach [54.429396802848224]
This paper proposes an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases.
For interpretability, the model achieves target-driven motion prediction by estimating the spatial distribution of long-term destinations.
Experiments on motion prediction datasets validate that the fitted model can be interpretable and generalizable.
arXiv Detail & Related papers (2024-03-10T04:16:04Z)
- On the Efficient Marginalization of Probabilistic Sequence Models [3.5897534810405403]
This dissertation focuses on using autoregressive models to answer complex probabilistic queries.
We develop a class of novel and efficient approximation techniques for marginalization in sequential models that are model-agnostic.
arXiv Detail & Related papers (2024-03-06T19:29:08Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty that accounts for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- User-defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems [49.75149094527068]
We show that diffusion models can be adapted to make predictions and provide uncertainty quantification for chaotic dynamical systems.
We develop a probabilistic approximation scheme for the conditional score function which converges to the true distribution as the noise level decreases.
We are able to sample conditionally on nonlinear user-defined events at inference time, and the generated samples match data statistics even when sampling from the tails of the distribution.
arXiv Detail & Related papers (2023-06-13T03:42:03Z)
- Conformal prediction for the design problem [72.14982816083297]
In many real-world deployments of machine learning, we use a prediction algorithm to choose what data to test next.
In such settings, there is a distinct type of distribution shift between the training and test data.
We introduce a method to quantify predictive uncertainty in such settings.
arXiv Detail & Related papers (2022-02-08T02:59:12Z)
- Probabilistic Forecasting with Generative Networks via Scoring Rule Minimization [5.5643498845134545]
We use generative neural networks to parametrize distributions on high-dimensional spaces by transforming draws from a latent variable.
We train generative networks to minimize a predictive-sequential (or prequential) scoring rule on a recorded temporal sequence of the phenomenon of interest.
Our method outperforms state-of-the-art adversarial approaches, especially in probabilistic calibration.
arXiv Detail & Related papers (2021-12-15T15:51:12Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We study two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully/semi/weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Probabilistic Time Series Forecasting with Implicit Quantile Networks [0.7249731529275341]
We combine an autoregressive recurrent neural network, which models temporal dynamics, with Implicit Quantile Networks to learn a large class of distributions over a time-series target.
Our approach is favorable in terms of point-wise prediction accuracy as well as in estimating the underlying temporal distribution.
arXiv Detail & Related papers (2021-07-08T10:37:24Z)
- Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting [98.57851612518758]
Probabilistic time series forecasting involves estimating the distribution of future values based on their history.
We propose a deep state space model for probabilistic time series forecasting in which the non-linear emission model and transition model are parameterized by neural networks.
We show in experiments that our model produces accurate and sharp probabilistic forecasts.
arXiv Detail & Related papers (2021-01-31T06:49:33Z)
- A comprehensive study on the prediction reliability of graph neural networks for virtual screening [0.0]
We investigate the effects of model architectures, regularization methods, and loss functions on the prediction performance and reliability of classification results.
Our results highlight that the correct choice of regularization and inference methods is important for achieving a high success rate.
arXiv Detail & Related papers (2020-03-17T10:13:31Z)
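For the conformal-prediction entries in the list above (the conditional normalising flow paper and the design-problem paper), the sketch below shows the generic split conformal recipe for one-dimensional regression intervals under exchangeability. The function name and the synthetic residuals are assumptions for illustration; those papers build on this calibration step but adapt the scores to flows and to design-induced distribution shift, neither of which is covered here.

```python
# Minimal sketch of split conformal prediction intervals for regression.
# Illustrative helper and toy data; not the cited papers' methods.
import numpy as np

def split_conformal_interval(residuals_cal, y_hat_test, alpha=0.1):
    """(1 - alpha) prediction intervals from held-out calibration residuals |y - y_hat|."""
    n = len(residuals_cal)
    # Rank of the finite-sample-corrected empirical quantile, ceil((n + 1)(1 - alpha)).
    k = min(n, int(np.ceil((n + 1) * (1 - alpha))))
    q_hat = np.sort(residuals_cal)[k - 1]
    return y_hat_test - q_hat, y_hat_test + q_hat

# Toy usage with synthetic residuals and point predictions.
rng = np.random.default_rng(0)
residuals_cal = np.abs(rng.normal(size=500))   # |y - y_hat| on a calibration split
y_hat_test = rng.normal(size=10)               # point predictions on new inputs
lower, upper = split_conformal_interval(residuals_cal, y_hat_test, alpha=0.1)
# Each [lower[i], upper[i]] covers the true y with probability >= 0.9 under exchangeability.
```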
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.