Scalable Uncertainty for Computer Vision with Functional Variational
Inference
- URL: http://arxiv.org/abs/2003.03396v1
- Date: Fri, 6 Mar 2020 19:09:42 GMT
- Title: Scalable Uncertainty for Computer Vision with Functional Variational
Inference
- Authors: Eduardo D C Carvalho, Ronald Clark, Andrea Nicastro, Paul H J Kelly
- Abstract summary: We leverage the formulation of variational inference in function space.
We obtain predictive uncertainty estimates at the cost of a single forward pass through any chosen CNN architecture.
We propose numerically efficient algorithms which enable fast training in the context of high-dimensional tasks.
- Score: 18.492485304537134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Deep Learning continues to yield successful applications in Computer
Vision, the ability to quantify all forms of uncertainty is a paramount
requirement for its safe and reliable deployment in the real world. In this
work, we leverage the formulation of variational inference in function space,
where we associate Gaussian Processes (GPs) to both Bayesian CNN priors and
variational family. Since GPs are fully determined by their mean and covariance
functions, we are able to obtain predictive uncertainty estimates at the cost
of a single forward pass through any chosen CNN architecture and for any
supervised learning task. By leveraging the structure of the induced covariance
matrices, we propose numerically efficient algorithms which enable fast
training in the context of high-dimensional tasks such as depth estimation and
semantic segmentation. Additionally, we provide sufficient conditions for
constructing regression loss functions whose probabilistic counterparts are
compatible with aleatoric uncertainty quantification.
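Two reusable ideas from the abstract are (i) obtaining predictive uncertainty in a single forward pass by having the network output both a mean and a variance, and (ii) choosing a regression loss whose probabilistic counterpart is compatible with aleatoric uncertainty. Below is a minimal sketch of the standard heteroscedastic Gaussian negative log-likelihood, a widely used instance of such a loss, not the paper's exact construction; the `forward` function is a hypothetical stand-in for a CNN's mean and log-variance heads.

```python
import math

def gaussian_nll(y, mu, log_var):
    """Heteroscedastic Gaussian negative log-likelihood (up to a constant).

    Predicting log-variance keeps the loss numerically stable and the
    variance implicitly positive. With log_var held fixed, the loss
    reduces to a scaled squared error, so plain L2 regression is
    recovered as a special case.
    """
    return 0.5 * math.exp(-log_var) * (y - mu) ** 2 + 0.5 * log_var

def forward(x):
    """Single 'forward pass' of a hypothetical regression head that
    emits both a predictive mean and a log-variance."""
    mu = 2.0 * x      # stand-in for the network's mean head
    log_var = -1.0    # stand-in for the network's variance head
    return mu, log_var

mu, log_var = forward(1.5)
loss = gaussian_nll(y=3.5, mu=mu, log_var=log_var)
sigma = math.exp(0.5 * log_var)  # predictive (aleatoric) std-dev, one pass
```

Because both heads are produced by the same forward pass, the predictive standard deviation comes at essentially no extra inference cost, which is the property the abstract emphasizes for high-dimensional tasks such as depth estimation.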
Related papers
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
Temporal Difference (TD) learning, arguably the most widely used algorithm for policy evaluation, serves as a natural framework for this purpose.
In this paper, we study the consistency properties of TD learning with Polyak-Ruppert averaging and linear function approximation, and obtain three significant improvements over existing results.
arXiv Detail & Related papers (2024-10-21T15:34:44Z)
- Alpha-VI DeepONet: A prior-robust variational Bayesian approach for enhancing DeepONets with uncertainty quantification [0.0]
We introduce a novel deep operator network (DeepONet) framework that incorporates generalised variational inference (GVI)
By incorporating Bayesian neural networks as the building blocks for the branch and trunk networks, our framework endows DeepONet with uncertainty quantification.
We demonstrate that modifying the variational objective function yields superior results in terms of minimising the mean squared error.
arXiv Detail & Related papers (2024-08-01T16:22:03Z)
- Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations [80.86128012438834]
We show for the first time that computing the robustness of counterfactuals with respect to plausible model shifts is NP-complete.
We propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees.
arXiv Detail & Related papers (2024-07-10T09:13:11Z)
- Enhancing Reliability of Neural Networks at the Edge: Inverted Normalization with Stochastic Affine Transformations [0.22499166814992438]
We propose a method to inherently enhance the robustness and inference accuracy of BayNNs deployed in in-memory computing architectures.
Empirical results show a graceful degradation in inference accuracy, with an improvement of up to 58.11%.
arXiv Detail & Related papers (2024-01-23T00:27:31Z)
- PICProp: Physics-Informed Confidence Propagation for Uncertainty Quantification [30.66285259412019]
This paper introduces and studies confidence interval estimation for deterministic partial differential equations as a novel problem.
The goal is to propagate confidence, in the form of confidence intervals (CIs), from data locations to the entire domain with probabilistic guarantees.
We propose a method, termed Physics-Informed Confidence propagation (PICProp), based on bi-level optimization to compute a valid CI without making heavy assumptions.
arXiv Detail & Related papers (2023-10-10T18:24:50Z)
- Toward Robust Uncertainty Estimation with Random Activation Functions [3.0586855806896045]
We propose a novel approach for uncertainty quantification via ensembles, called Random Activation Functions (RAFs) Ensemble.
RAFs Ensemble outperforms state-of-the-art ensemble uncertainty quantification methods on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-02-28T13:17:56Z)
- Modeling Uncertain Feature Representation for Domain Generalization [49.129544670700525]
We show that our method consistently improves the network generalization ability on multiple vision tasks.
Our methods are simple yet effective and can be readily integrated into networks without additional trainable parameters or loss constraints.
arXiv Detail & Related papers (2023-01-16T14:25:02Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Uncertainty Modeling for Out-of-Distribution Generalization [56.957731893992495]
We argue that the feature statistics can be properly manipulated to improve the generalization ability of deep learning models.
Common methods often consider the feature statistics as deterministic values measured from the learned features.
We improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training.
arXiv Detail & Related papers (2022-02-08T16:09:12Z)
- PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We present PDC-Net+, an Enhanced Probabilistic Dense Correspondence Network capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z)
- Minimum-Delay Adaptation in Non-Stationary Reinforcement Learning via Online High-Confidence Change-Point Detection [7.685002911021767]
We introduce an algorithm that efficiently learns policies in non-stationary environments.
It analyzes a possibly infinite stream of data and computes, in real-time, high-confidence change-point detection statistics.
We show that (i) this algorithm minimizes the delay until unforeseen changes to a context are detected, thereby allowing for rapid responses.
arXiv Detail & Related papers (2021-05-20T01:57:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.