Uncertainty Prediction for Deep Sequential Regression Using Meta Models
- URL: http://arxiv.org/abs/2007.01350v2
- Date: Thu, 22 Jul 2021 21:59:50 GMT
- Title: Uncertainty Prediction for Deep Sequential Regression Using Meta Models
- Authors: Jiri Navratil, Matthew Arnold, Benjamin Elder
- Abstract summary: This paper describes a flexible method that can generate symmetric and asymmetric uncertainty estimates.
It makes no assumptions about stationarity and outperforms competitive baselines on both drift and non-drift scenarios.
- Score: 4.189643331553922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating high-quality uncertainty estimates for sequential regression, particularly for deep recurrent networks, remains a challenging and open problem. Existing approaches often make restrictive assumptions (such as stationarity) yet still perform poorly in practice, particularly in the presence of real-world non-stationary signals and drift. This paper describes a flexible method that can generate symmetric and asymmetric uncertainty estimates, makes no assumptions about stationarity, and outperforms competitive baselines on both drift and non-drift scenarios. This work helps make sequential regression more effective and practical for use in real-world applications, and is a powerful new addition to the modeling toolbox for sequential uncertainty quantification in general.
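To make the abstract's idea concrete, below is a minimal, generic sketch of a meta-model approach: a second model is trained on a held-out split to predict the base regressor's positive and negative error magnitudes, which yields an asymmetric uncertainty band. This is an illustration under simplifying assumptions (synthetic data, linear least-squares models standing in for a deep recurrent network, an arbitrary band scale factor), not the paper's exact architecture or training procedure.

```python
# Illustrative sketch only (assumed setup, not the paper's exact method): a base
# autoregressive regressor stands in for a deep recurrent network, and separate
# "meta models" are trained on a held-out split to predict the base model's positive
# and negative error magnitudes, giving an asymmetric uncertainty band.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic non-stationary sequence: trend + seasonality + noise that grows over time.
t = np.arange(2000)
noise_scale = 0.2 + 0.8 * (t / t.max())
y = 0.002 * t + np.sin(t / 25.0) + rng.normal(0.0, noise_scale)

# Lagged feature windows for one-step-ahead regression.
window = 10
X = np.stack([y[i:i + window] for i in range(len(y) - window)])
target = y[window:]

# Chronological splits: base-model training, meta-model training, evaluation.
n = len(target)
tr, cal, test = slice(0, n // 2), slice(n // 2, 3 * n // 4), slice(3 * n // 4, n)

def fit_linear(A, b):
    """Ordinary least squares with a bias term; returns a prediction function."""
    A1 = np.hstack([A, np.ones((len(A), 1))])
    w, *_ = np.linalg.lstsq(A1, b, rcond=None)
    return lambda Z: np.hstack([Z, np.ones((len(Z), 1))]) @ w

# 1) Base regressor (the model whose uncertainty we want to quantify).
base = fit_linear(X[tr], target[tr])

# 2) Meta models: regress the positive and negative residual magnitudes
#    separately, so the resulting band does not have to be symmetric.
resid = target[cal] - base(X[cal])
upper_meta = fit_linear(X[cal], np.maximum(resid, 0.0))
lower_meta = fit_linear(X[cal], np.maximum(-resid, 0.0))

# 3) Asymmetric uncertainty band on the evaluation split. The scale factor 2.0 is
#    an arbitrary choice for this sketch; in practice it would be calibrated to a
#    target coverage level.
pred = base(X[test])
hi = pred + 2.0 * np.maximum(upper_meta(X[test]), 0.0)
lo = pred - 2.0 * np.maximum(lower_meta(X[test]), 0.0)
coverage = np.mean((target[test] >= lo) & (target[test] <= hi))
print(f"empirical coverage of the asymmetric band: {coverage:.2f}")
```

Running the sketch prints the empirical coverage of the asymmetric band on the held-out test split; in the paper's setting the base model would be a deep recurrent network and the uncertainty predictor would be a meta model trained to anticipate that network's errors.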
Related papers
- Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning [5.318766629972959]
Uncertainty quantification is a crucial but challenging task in many high-dimensional regression or learning problems.
We develop a new data-driven approach for UQ in regression that applies to classical regression approaches as well as to neural networks.
arXiv Detail & Related papers (2024-07-18T16:42:10Z)
- Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining prediction intervals via the empirical estimation of quantiles in the distribution of outputs (a minimal sketch of this pinball-loss baseline appears after this list).
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile regression based interval construction that removes this arbitrary constraint.
We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
arXiv Detail & Related papers (2024-06-05T13:36:38Z)
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Towards Reliable Uncertainty Quantification via Deep Ensembles in Multi-output Regression Task [0.0]
This study aims to investigate the deep ensemble approach, an approximate Bayesian inference, in the multi-output regression task.
A trend of underestimating uncertainty as the uncertainty itself increases is observed for the first time.
We propose the deep ensemble framework that applies the post-hoc calibration method to improve its uncertainty quantification performance.
arXiv Detail & Related papers (2023-03-28T05:10:57Z)
- How Reliable is Your Regression Model's Uncertainty Under Real-World Distribution Shifts? [46.05502630457458]
We propose a benchmark of 8 image-based regression datasets with different types of challenging distribution shifts.
We find that while methods are well calibrated when there is no distribution shift, they all become highly overconfident on many of the benchmark datasets.
arXiv Detail & Related papers (2023-02-07T18:54:39Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Stochastic Training is Not Necessary for Generalization [57.04880404584737]
It is widely believed that the implicit regularization of stochastic gradient descent (SGD) is fundamental to the impressive generalization behavior we observe in neural networks.
In this work, we demonstrate that non-stochastic full-batch training can achieve strong performance on CIFAR-10 that is on-par with SGD.
arXiv Detail & Related papers (2021-09-29T00:50:00Z)
- Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution to extract aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Uncertainty-Aware (UNA) Bases for Deep Bayesian Regression Using Multi-Headed Auxiliary Networks [23.100727871427367]
We show that traditional training procedures for Neural Linear Models drastically underestimate uncertainty on out-of-distribution inputs.
We propose a novel training framework that captures useful predictive uncertainties for downstream tasks.
arXiv Detail & Related papers (2020-06-21T02:46:05Z)
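As referenced in the Relaxed Quantile Regression entry above, the following is a minimal sketch of the standard pinball-loss quantile regression baseline that such interval-construction methods build on. It is not the RQR method itself; the linear model, subgradient descent, and synthetic asymmetric noise are simplifying assumptions made for brevity.

```python
# Minimal sketch of standard pinball-loss quantile regression (the baseline that
# Relaxed Quantile Regression modifies, not RQR itself). Linear model, subgradient
# descent, and synthetic asymmetric noise are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 2.0, size=(500, 1))
# Exponential noise makes the output distribution asymmetric around its median.
y = 1.5 * x[:, 0] + rng.exponential(scale=1.0, size=500)

def fit_quantile(x, y, tau, lr=0.05, steps=3000):
    """Linear quantile regression at level tau via subgradient descent on the
    pinball loss L_tau(e) = max(tau * e, (tau - 1) * e), where e = y - prediction."""
    X1 = np.hstack([x, np.ones((len(x), 1))])
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        err = y - X1 @ w
        # Subgradient of the pinball loss with respect to the prediction.
        dloss = np.where(err > 0, -tau, 1.0 - tau)
        w -= lr * (dloss @ X1) / len(y)
    return lambda z: np.hstack([z, np.ones((len(z), 1))]) @ w

# Fit the 0.05 and 0.95 conditional quantiles independently; their gap forms an
# interval whose upper and lower margins can differ under asymmetric noise.
lower, upper = fit_quantile(x, y, 0.05), fit_quantile(x, y, 0.95)
inside = np.mean((y >= lower(x)) & (y <= upper(x)))
print(f"fraction of points inside the [0.05, 0.95] interval: {inside:.2f}")
```

Fitting the two quantile levels separately is what lets the resulting interval sit asymmetrically around the conditional median, which is the property the asymmetric-noise setting above relies on.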