On the Quantification of Image Reconstruction Uncertainty without
Training Data
- URL: http://arxiv.org/abs/2311.09639v1
- Date: Thu, 16 Nov 2023 07:46:47 GMT
- Title: On the Quantification of Image Reconstruction Uncertainty without Training Data
- Authors: Sirui Bi, Victor Fung, Jiaxin Zhang
- Abstract summary: We propose a deep variational framework that leverages a deep generative model to learn an approximate posterior distribution.
We parameterize the target posterior using a flow-based model and minimize the Kullback-Leibler (KL) divergence between the two distributions to achieve accurate uncertainty estimation.
Our results indicate that our method provides reliable and high-quality image reconstruction with robust uncertainty estimation.
- Score: 5.057039869893053
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational imaging plays a pivotal role in determining hidden information
from sparse measurements. A robust inverse solver is crucial to fully
characterize the uncertainty induced by these measurements, as it allows for
the estimation of the complete posterior of unrecoverable targets. This, in
turn, facilitates a probabilistic interpretation of observational data for
decision-making. In this study, we propose a deep variational framework that
leverages a deep generative model to learn an approximate posterior
distribution to effectively quantify image reconstruction uncertainty without
the need for training data. We parameterize the target posterior using a
flow-based model and minimize the Kullback-Leibler (KL) divergence between the
two distributions to achieve accurate uncertainty estimation. To bolster
stability, we introduce a robust
flow-based model with bi-directional regularization and enhance expressivity
through gradient boosting. Additionally, we incorporate a space-filling design
to achieve substantial variance reduction on both latent prior space and target
posterior space. We validate our method on several benchmark tasks and two
real-world applications, namely fastMRI and black hole image reconstruction.
Our results indicate that our method provides reliable and high-quality image
reconstruction with robust uncertainty estimation.
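As a toy illustration of the core idea in the abstract (the one-dimensional affine "flow" and the Gaussian target are assumptions of this sketch, not the paper's actual architecture), the reverse KL divergence between a flow-induced density and a target posterior can be estimated by Monte Carlo over base-distribution samples and, in this simple Gaussian case, checked against the closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mu, sigma):
    """Log density of a univariate Gaussian N(mu, sigma^2)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# Affine "flow": x = mu_q + sigma_q * z with z ~ N(0, 1),
# so the induced density q is N(mu_q, sigma_q^2).
mu_q, sigma_q = 0.5, 1.5   # variational parameters (illustrative values)
mu_p, sigma_p = 2.0, 0.5   # toy target posterior

z = rng.standard_normal(100_000)
x = mu_q + sigma_q * z
# Change of variables: log q(x) = log N(z; 0, 1) - log |dx/dz|
log_q = log_normal(z, 0.0, 1.0) - np.log(sigma_q)
log_p = log_normal(x, mu_p, sigma_p)
kl_mc = np.mean(log_q - log_p)   # Monte Carlo estimate of KL(q || p)

# Closed-form KL between the two Gaussians, for comparison
kl_exact = (np.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2) - 0.5)
print(kl_mc, kl_exact)
```

In the actual method the flow is a deep invertible network and the target density is known only up to the measurement likelihood and prior, but the objective being minimized has exactly this sample-based form.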
Related papers
- One-step Generative Diffusion for Realistic Extreme Image Rescaling [47.89362819768323]
We propose a novel framework called One-Step Image Rescaling Diffusion (OSIRDiff) for extreme image rescaling.
OSIRDiff performs rescaling operations in the latent space of a pre-trained autoencoder.
It effectively leverages powerful natural image priors learned by a pre-trained text-to-image diffusion model.
arXiv Detail & Related papers (2024-08-17T09:51:42Z)
- Uncertainty Quantification for Deep Unrolling-Based Computational Imaging [0.0]
We propose a learning-based image reconstruction framework that incorporates the observation model into the reconstruction task.
We show that the proposed framework can provide uncertainty information while achieving comparable reconstruction performance to state-of-the-art deep unrolling methods.
arXiv Detail & Related papers (2022-07-02T00:22:49Z)
- A Probabilistic Deep Image Prior for Computational Tomography [0.19573380763700707]
Existing deep-learning based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty.
We construct a Bayesian prior for tomographic reconstruction, which combines the classical total variation (TV) regulariser with the modern deep image prior (DIP)
For the inference, we develop an approach based on the linearised Laplace method, which is scalable to high-dimensional settings.
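The TV half of such a combined prior is simple to evaluate; the following minimal sketch (the function name and the test image are illustrative, not from the paper) computes the anisotropic total-variation penalty of a small image containing a single vertical edge:

```python
import numpy as np

def tv_penalty(img):
    """Anisotropic total variation: sum of absolute neighbour differences."""
    dx = np.abs(np.diff(img, axis=1)).sum()  # horizontal differences
    dy = np.abs(np.diff(img, axis=0)).sum()  # vertical differences
    return dx + dy

img = np.zeros((4, 4))
img[:, 2:] = 1.0                 # a vertical edge of height 1
print(tv_penalty(img))           # → 4.0 (one unit jump in each of 4 rows)
```

Used as a log-prior term, this penalty favours piecewise-constant reconstructions; the cited paper combines it with a deep image prior rather than using it alone.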
arXiv Detail & Related papers (2022-02-28T14:47:14Z)
- Robust Depth Completion with Uncertainty-Driven Loss Functions [60.9237639890582]
We introduce uncertainty-driven loss functions to improve the robustness of depth completion and to handle its inherent uncertainty.
Our method has been tested on KITTI Depth Completion Benchmark and achieved the state-of-the-art robustness performance in terms of MAE, IMAE, and IRMSE metrics.
arXiv Detail & Related papers (2021-12-15T05:22:34Z)
- PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We present PDC-Net+, an Enhanced Probabilistic Dense Correspondence Network capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z)
- Learning Accurate Dense Correspondences and When to Trust Them [161.76275845530964]
We aim to estimate a dense flow field relating two images, coupled with a robust pixel-wise confidence map.
We develop a flexible probabilistic approach that jointly learns the flow prediction and its uncertainty.
Our approach obtains state-of-the-art results on challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-01-05T18:54:11Z)
- Quantifying Sources of Uncertainty in Deep Learning-Based Image Reconstruction [5.129343375966527]
We propose a scalable and efficient framework to simultaneously quantify aleatoric and epistemic uncertainties in learned iterative image reconstruction.
We show that our method exhibits competitive performance against conventional benchmarks for computed tomography with both sparse view and limited angle data.
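A common way to separate the two uncertainty sources is an ensemble-style decomposition (a generic sketch with synthetic numbers, not the cited paper's specific construction): average the members' predicted noise variances to get the aleatoric part, and take the variance of their predicted means as the epistemic part.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ensemble: M reconstructions, each predicting a per-pixel mean
# and a per-pixel (aleatoric) variance over npix pixels.
M, npix = 10, 6
means = rng.normal(0.0, 1.0, (M, npix))    # per-member predicted means
varis = rng.uniform(0.1, 0.3, (M, npix))   # per-member predicted variances

aleatoric = varis.mean(axis=0)   # average predicted observation noise
epistemic = means.var(axis=0)    # disagreement between ensemble members
total = aleatoric + epistemic    # predictive variance per pixel
```

The cited paper quantifies both sources within a single learned iterative scheme rather than via an explicit ensemble, but the additive decomposition of predictive variance is the same.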
arXiv Detail & Related papers (2020-11-17T04:12:52Z)
- Probabilistic 3D surface reconstruction from sparse MRI information [58.14653650521129]
We present a novel probabilistic deep learning approach for concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric uncertainty prediction.
Our method is capable of reconstructing large surface meshes from three quasi-orthogonal MR imaging slices from limited training sets.
arXiv Detail & Related papers (2020-10-05T14:18:52Z)
- Sampling possible reconstructions of undersampled acquisitions in MR imaging [9.75702493778194]
Undersampling k-space during MR acquisition saves time; however, it results in an ill-posed inversion problem with an infinite set of images as possible solutions.
Traditionally, this is tackled as a reconstruction problem by searching for a single "best" image out of this solution set according to some chosen regularization or prior.
We propose a method that instead returns multiple images that are possible under the acquisition model and the chosen prior, capturing the uncertainty in the inversion process.
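A minimal linear-algebra sketch of that idea (a small random measurement matrix stands in for the undersampled MR operator; all names and sizes are illustrative): any vector of the form "minimum-norm solution plus a null-space component" reproduces the measurements exactly, so drawing random null-space coefficients yields multiple data-consistent reconstructions.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 8, 4                           # image size, number of measurements (m < n)
A = rng.standard_normal((m, n))       # toy undersampled acquisition operator
x_true = rng.standard_normal(n)
y = A @ x_true                        # observed measurements

# Minimum-norm solution: satisfies A @ x_mn == y exactly.
x_mn = np.linalg.pinv(A) @ y

# Orthonormal basis of the null space of A from the SVD:
# the last n - m right singular vectors span null(A).
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[m:].T                 # shape (n, n - m)

# Each sample is consistent with y but differs in the unmeasured directions.
samples = [x_mn + null_basis @ rng.standard_normal(n - m) for _ in range(5)]
residuals = [np.linalg.norm(A @ s - y) for s in samples]
```

The cited method samples from a learned prior rather than an isotropic Gaussian over the null space, but the measurements constrain the solution in exactly this sense.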
arXiv Detail & Related papers (2020-09-30T18:20:06Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.