Principal Uncertainty Quantification with Spatial Correlation for Image
Restoration Problems
- URL: http://arxiv.org/abs/2305.10124v3
- Date: Sat, 20 Jan 2024 09:34:21 GMT
- Title: Principal Uncertainty Quantification with Spatial Correlation for Image
Restoration Problems
- Authors: Omer Belhasin, Yaniv Romano, Daniel Freedman, Ehud Rivlin, Michael
Elad
- Abstract summary: PUQ -- Principal Uncertainty Quantification -- is a novel definition and corresponding analysis of uncertainty regions.
We derive uncertainty intervals around principal components of the empirical posterior distribution, forming an ambiguity region.
Our approach is verified through experiments on image colorization, super-resolution, and inpainting.
- Score: 35.46703074728443
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty quantification for inverse problems in imaging has drawn much
attention lately. Existing approaches towards this task define uncertainty
regions based on probable values per pixel, while ignoring spatial correlations
within the image, resulting in an exaggerated volume of uncertainty. In this
paper, we propose PUQ (Principal Uncertainty Quantification) -- a novel
definition and corresponding analysis of uncertainty regions that takes into
account spatial relationships within the image, thus providing reduced volume
regions. Using recent advancements in generative models, we derive uncertainty
intervals around principal components of the empirical posterior distribution,
forming an ambiguity region that guarantees the inclusion of true unseen values
with a user-defined confidence probability. To improve computational efficiency
and interpretability, we also guarantee the recovery of true unseen values
using only a few principal directions, resulting in more informative
uncertainty regions. Our approach is verified through experiments on image
colorization, super-resolution, and inpainting; its effectiveness is shown
through comparison to baseline methods, demonstrating significantly tighter
uncertainty regions.
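
The following sketch illustrates the general recipe described in the abstract, not the authors' released implementation: draw posterior samples from a conditional generative model, take the leading principal components of those samples, and form per-direction intervals. The quantile-based interval rule here is a simplification of the paper's calibrated guarantee, and all names and sizes are illustrative assumptions.

    import numpy as np

    def principal_uncertainty_region(samples, n_dirs=5, alpha=0.1):
        """Sketch of a PUQ-style region built from posterior samples.

        samples: (N, d) array of samples for one degraded input, e.g. drawn
        from a conditional generative model (an assumption of this sketch).
        Returns the sample mean, the top principal directions, and
        per-direction intervals holding a (1 - alpha) fraction of the mass.
        """
        mu = samples.mean(axis=0)
        centered = samples - mu
        # Principal directions of the empirical posterior via SVD.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        dirs = vt[:n_dirs]                      # (n_dirs, d)
        coords = centered @ dirs.T              # projections onto each direction
        # Empirical quantile intervals; the paper's coverage guarantee would
        # instead calibrate these widths on held-out data.
        lo = np.quantile(coords, alpha / 2, axis=0)
        hi = np.quantile(coords, 1 - alpha / 2, axis=0)
        return mu, dirs, lo, hi

    # Illustrative usage: 100 posterior samples of a 32x32 grayscale patch.
    samples = np.random.default_rng(0).normal(size=(100, 32 * 32))
    mu, dirs, lo, hi = principal_uncertainty_region(samples)
    print(dirs.shape, lo.shape)                 # (5, 1024) (5,)

A full PUQ procedure would additionally calibrate the interval widths and the number of retained directions on held-out data so that the resulting region covers unseen ground truths with the user-specified probability.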
Related papers
- Probabilistic Contrastive Learning with Explicit Concentration on the Hypersphere [3.572499139455308]
This paper introduces a new perspective on incorporating uncertainty into contrastive learning by embedding representations within a spherical space.
We leverage the concentration parameter, kappa, as a direct, interpretable measure to quantify uncertainty explicitly.
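A hedged sketch of the general idea, as illustrated below (the head is a placeholder, not the paper's architecture): an encoder predicts a unit-norm mean direction on the hypersphere together with a von Mises-Fisher concentration kappa, and a small kappa is read as high uncertainty.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VMFHead(nn.Module):
        """Toy head predicting a von Mises-Fisher mean direction and concentration."""
        def __init__(self, in_dim=128, emb_dim=64):
            super().__init__()
            self.mu_head = nn.Linear(in_dim, emb_dim)
            self.kappa_head = nn.Linear(in_dim, 1)

        def forward(self, features):
            mu = F.normalize(self.mu_head(features), dim=-1)       # point on the hypersphere
            kappa = F.softplus(self.kappa_head(features)) + 1e-6   # positive concentration
            return mu, kappa

    head = VMFHead()
    mu, kappa = head(torch.randn(8, 128))
    uncertainty = 1.0 / kappa    # low concentration read as high uncertainty (heuristic)
    print(mu.shape, kappa.shape)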
arXiv Detail & Related papers (2024-05-26T07:08:13Z)
- On the Quantification of Image Reconstruction Uncertainty without Training Data [5.057039869893053]
We propose a deep variational framework that leverages a deep generative model to learn an approximate posterior distribution.
We parameterize the target posterior using a flow-based model and minimize their Kullback-Leibler (KL) divergence to achieve accurate uncertainty estimation.
Our results indicate that our method provides reliable and high-quality image reconstruction with robust uncertainty estimation.
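A minimal sketch of this style of variational inference, shown below under strong simplifications: the paper fits a flow-based posterior, whereas this toy code uses a diagonal-Gaussian reparameterization and a quadratic stand-in for the generative prior; the forward operator, dimensions, and step counts are arbitrary assumptions.

    import torch

    # Toy problem: recover x from y = A x + noise. q is a diagonal Gaussian
    # trained with a single-sample Monte Carlo estimate of KL(q || p(x|y)).
    d = 16
    A = torch.randn(8, d) * 0.3            # assumed forward operator
    y = torch.randn(8)                     # toy measurement
    sigma = 0.1                            # assumed noise level

    mu = torch.zeros(d, requires_grad=True)
    log_std = torch.zeros(d, requires_grad=True)
    opt = torch.optim.Adam([mu, log_std], lr=1e-2)

    def log_target(x):
        # Gaussian likelihood plus a standard-normal stand-in for a deep prior.
        return -((A @ x - y) ** 2).sum() / (2 * sigma ** 2) - (x ** 2).sum() / 2

    for step in range(500):
        eps = torch.randn(d)
        x = mu + eps * log_std.exp()       # reparameterized sample from q
        log_q = -((x - mu) ** 2 / (2 * log_std.exp() ** 2) + log_std).sum()
        loss = log_q - log_target(x)       # stochastic KL(q || posterior), up to a constant
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(mu.detach()[:4], log_std.exp().detach()[:4])   # approximate posterior mean / std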
arXiv Detail & Related papers (2023-11-16T07:46:47Z)
- Equivariant Bootstrapping for Uncertainty Quantification in Imaging Inverse Problems [0.24475591916185502]
We present a new uncertainty quantification methodology based on an equivariant formulation of the parametric bootstrap algorithm.
The proposed methodology is general and can be easily applied with any image reconstruction technique.
We demonstrate the proposed approach with a series of numerical experiments and through comparisons with alternative uncertainty quantification strategies.
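A rough sketch of an equivariant-bootstrap-style loop follows, under toy assumptions (identity-plus-noise forward model, flips as the transformation group, a trivial reconstruction routine); the paper's exact resampling scheme differs.

    import numpy as np

    rng = np.random.default_rng(0)

    def forward(x):                        # toy inverse problem: identity plus noise
        return x + 0.05 * rng.normal(size=x.shape)

    def reconstruct(y):                    # stand-in for a reconstruction network
        return y

    def equivariant_bootstrap(y, n_boot=200):
        """Perturb the estimate with random group actions (here: flips),
        re-measure, re-reconstruct, undo the action, and collect the spread."""
        x_hat = reconstruct(y)
        errors = []
        for _ in range(n_boot):
            flip = rng.integers(0, 2)                 # group element: flip or identity
            act = lambda z: z[::-1] if flip else z    # the action is its own inverse
            y_star = forward(act(x_hat))              # synthetic measurement
            x_star = act(reconstruct(y_star))         # reconstruct, then invert the action
            errors.append(x_star - x_hat)
        return x_hat, np.stack(errors).std(axis=0)    # pixelwise uncertainty estimate

    x_hat, sigma = equivariant_bootstrap(forward(np.linspace(0, 1, 32)))
    print(sigma.mean())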
arXiv Detail & Related papers (2023-10-18T09:43:15Z)
- Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
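Below is a tabular sketch of a fixed-point iteration in the spirit of an uncertainty Bellman equation; the specific recursion and the local-uncertainty term are illustrative, not the paper's equation.

    import numpy as np

    def uncertainty_bellman(P, local_uncertainty, gamma=0.9, iters=200):
        """Fixed-point iteration of an illustrative uncertainty Bellman recursion:
        u(s) = w(s) + gamma**2 * sum_s' P(s'|s) u(s')."""
        u = np.zeros_like(local_uncertainty)
        for _ in range(iters):
            u = local_uncertainty + (gamma ** 2) * P @ u
        return u

    P = np.array([[0.9, 0.1], [0.2, 0.8]])   # toy 2-state transition matrix
    w = np.array([0.5, 0.1])                 # toy local uncertainty per state
    print(uncertainty_bellman(P, w))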
arXiv Detail & Related papers (2023-02-24T09:18:27Z)
- Variational Voxel Pseudo Image Tracking [127.46919555100543]
Uncertainty estimation is an important task in safety-critical domains such as robotics and autonomous driving.
We propose a Variational Neural Network-based version of a Voxel Pseudo Image Tracking (VPIT) method for 3D Single Object Tracking.
arXiv Detail & Related papers (2023-02-12T13:34:50Z)
- BayesIMP: Uncertainty Quantification for Causal Data Fusion [52.184885680729224]
We study the causal data fusion problem, where datasets pertaining to multiple causal graphs are combined to estimate the average treatment effect of a target variable.
We introduce a framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-06-07T10:14:18Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
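One common way to operationalize this idea, shown here as an assumption-laden sketch rather than the paper's method, is to fit a simple density model to training-time latent activations and read low density on new inputs as high epistemic uncertainty.

    import numpy as np

    class LatentDensityScorer:
        """Fit a single Gaussian to a network's latent activations on training
        data; a large Mahalanobis distance on new activations is read as high
        epistemic uncertainty / likely OOD."""
        def fit(self, feats):                          # feats: (N, d) latent activations
            self.mu = feats.mean(axis=0)
            cov = np.cov(feats, rowvar=False) + 1e-4 * np.eye(feats.shape[1])
            self.prec = np.linalg.inv(cov)
            return self

        def score(self, feats):                        # higher = more uncertain
            diff = feats - self.mu
            return np.einsum("nd,de,ne->n", diff, self.prec, diff)

    rng = np.random.default_rng(0)
    scorer = LatentDensityScorer().fit(rng.normal(size=(1000, 32)))
    in_dist = scorer.score(rng.normal(size=(5, 32)))
    far_ood = scorer.score(rng.normal(loc=4.0, size=(5, 32)))
    print(in_dist.mean(), far_ood.mean())              # the OOD scores come out larger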
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- Bayesian Triplet Loss: Uncertainty Quantification in Image Retrieval [10.743633102172236]
Uncertainty quantification in image retrieval is crucial for downstream decisions.
We present a new method that views image embeddings as stochastic features rather than deterministic features.
We derive a variational approximation of the posterior, called the Bayesian triplet loss, that produces state-of-the-art uncertainty estimates.
arXiv Detail & Related papers (2020-11-25T11:47:33Z)
- Quantifying Sources of Uncertainty in Deep Learning-Based Image Reconstruction [5.129343375966527]
We propose a scalable and efficient framework to simultaneously quantify aleatoric and epistemic uncertainties in learned iterative image reconstruction.
We show that our method exhibits competitive performance against conventional benchmarks for computed tomography with both sparse view and limited angle data.
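Below is a generic sketch of separating the two uncertainty types in a learned reconstructor, not the paper's architecture: a heteroscedastic output head captures aleatoric noise, while Monte Carlo dropout at test time captures epistemic spread.

    import torch
    import torch.nn as nn

    class ToyReconstructor(nn.Module):
        """Placeholder reconstruction network: predicts a per-pixel mean and
        log-variance (aleatoric); dropout left active at test time gives a
        Monte Carlo estimate of epistemic uncertainty."""
        def __init__(self, d=64):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Dropout(0.2))
            self.mean_head = nn.Linear(128, d)
            self.logvar_head = nn.Linear(128, d)

        def forward(self, y):
            h = self.body(y)
            return self.mean_head(h), self.logvar_head(h)

    def mc_uncertainty(model, y, n_samples=32):
        model.train()                                  # keep dropout stochastic
        means, noise = [], []
        with torch.no_grad():
            for _ in range(n_samples):
                m, lv = model(y)
                means.append(m)
                noise.append(lv.exp())
        means, noise = torch.stack(means), torch.stack(noise)
        return means.mean(0), noise.mean(0), means.var(0)   # recon, aleatoric, epistemic

    recon, aleatoric, epistemic = mc_uncertainty(ToyReconstructor(), torch.randn(1, 64))
    print(aleatoric.mean().item(), epistemic.mean().item())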
arXiv Detail & Related papers (2020-11-17T04:12:52Z)
- Temporal Difference Uncertainties as a Signal for Exploration [76.6341354269013]
An effective approach to exploration in reinforcement learning is to rely on an agent's uncertainty over the optimal policy.
In this paper, we highlight that value estimates are easily biased and temporally inconsistent.
We propose a novel method for estimating uncertainty over the value function that relies on inducing a distribution over temporal difference errors.
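A loose illustration of the underlying signal follows, with a small ensemble standing in for the paper's induced distribution over TD errors: the disagreement of per-member TD errors on a transition serves as an exploration bonus.

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_members, gamma = 5, 8, 0.95
    V = rng.normal(scale=0.1, size=(n_members, n_states))   # ensemble of value tables

    def td_uncertainty(s, r, s_next):
        td_errors = r + gamma * V[:, s_next] - V[:, s]      # one TD error per member
        return td_errors.std()                              # disagreement = exploration signal

    print(td_uncertainty(s=0, r=1.0, s_next=3))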
arXiv Detail & Related papers (2020-10-05T18:11:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.