Principled Confidence Estimation for Deep Computed Tomography
- URL: http://arxiv.org/abs/2602.05812v1
- Date: Thu, 05 Feb 2026 16:04:19 GMT
- Title: Principled Confidence Estimation for Deep Computed Tomography
- Authors: Matteo Gätzner, Johannes Kirschner
- Abstract summary: We present a principled framework for confidence estimation in computed tomography (CT) reconstruction. We establish confidence regions with theoretical coverage guarantees for deep-learning-based CT reconstructions.
- Score: 3.8642937395065124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a principled framework for confidence estimation in computed tomography (CT) reconstruction. Based on the sequential likelihood mixing framework (Kirschner et al., 2025), we establish confidence regions with theoretical coverage guarantees for deep-learning-based CT reconstructions. We consider a realistic forward model following the Beer-Lambert law, i.e., a log-linear forward model with Poisson noise, closely reflecting clinical and scientific imaging conditions. The framework is general and applies to both classical algorithms and deep learning reconstruction methods, including U-Nets, U-Net ensembles, and generative Diffusion models. Empirically, we demonstrate that deep reconstruction methods yield substantially tighter confidence regions than classical reconstructions, without sacrificing theoretical coverage guarantees. Our approach allows the detection of hallucinations in reconstructed images and provides interpretable visualizations of confidence regions. This establishes deep models not only as powerful estimators, but also as reliable tools for uncertainty-aware medical imaging.
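The forward model described in the abstract, the Beer-Lambert law with Poisson noise, can be illustrated with a minimal simulation. This is a hedged sketch, not the authors' code: the image size, photon count `I0`, and the random matrix `A` (standing in for a real Radon/system matrix) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy attenuation image (flattened) and a random matrix A standing in
# for the ray/pixel intersection lengths of a real scanner geometry.
x = rng.uniform(0.0, 0.05, size=64)        # attenuation coefficients (assumed scale)
A = rng.uniform(0.0, 1.0, size=(32, 64))   # illustrative system matrix

I0 = 1e4                                   # incident photon count per detector ray
line_integrals = A @ x                     # log-linear part: A x
expected_counts = I0 * np.exp(-line_integrals)  # Beer-Lambert: E[y] = I0 * exp(-Ax)
y = rng.poisson(expected_counts)           # Poisson-distributed detector counts

# The standard log-transform recovers a noisy estimate of the line integrals,
# with signal-dependent noise -- the setting the paper's confidence regions address.
y_clipped = np.maximum(y, 1)               # avoid log(0) on empty detector bins
log_data = -np.log(y_clipped / I0)         # approximately A x
print(log_data.shape)
```

Reconstruction methods (classical or deep) then estimate `x` from `y`; the paper's contribution is wrapping such estimators in confidence regions with coverage guarantees, which this sketch does not attempt.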
Related papers
- Conformalized Generative Bayesian Imaging: An Uncertainty Quantification Framework for Computational Imaging [0.0]
Uncertainty quantification plays an important role in achieving trustworthy and reliable learning-based computational imaging. Recent advances in generative modeling and Bayesian neural networks have enabled the development of uncertainty-aware image reconstruction methods. We present a scalable framework that can quantify both aleatoric and epistemic uncertainties.
arXiv Detail & Related papers (2025-04-10T12:30:46Z)
- A Bayesian Approach to Robust Inverse Reinforcement Learning [54.24816623644148]
We consider a Bayesian approach to offline model-based inverse reinforcement learning (IRL).
The proposed framework differs from existing offline model-based IRL approaches by performing simultaneous estimation of the expert's reward function and subjective model of environment dynamics.
Our analysis reveals a novel insight that the estimated policy exhibits robust performance when the expert is believed to have a highly accurate model of the environment.
arXiv Detail & Related papers (2023-09-15T17:37:09Z)
- Uncertainty Estimation and Out-of-Distribution Detection for Deep Learning-Based Image Reconstruction using the Local Lipschitz [9.143327181265976]
Supervised deep learning-based approaches have been investigated for solving inverse problems including image reconstruction.
It is essential to assess whether a given input falls within the training data distribution for diagnostic purposes.
We propose a method based on a local Lipschitz metric to distinguish out-of-distribution images from in-distribution ones, achieving an area under the curve of 99.94%.
arXiv Detail & Related papers (2023-05-12T17:17:01Z)
- Uncertainty Quantification for Deep Unrolling-Based Computational Imaging [0.0]
We propose a learning-based image reconstruction framework that incorporates the observation model into the reconstruction task.
We show that the proposed framework can provide uncertainty information while achieving comparable reconstruction performance to state-of-the-art deep unrolling methods.
arXiv Detail & Related papers (2022-07-02T00:22:49Z)
- Learned reconstruction with convergence guarantees [3.9402707512848787]
We will specify relevant notions of convergence for data-driven image reconstruction.
A highlighted example is the role of input-convex neural networks (ICNNs), which offer the possibility of combining the power of deep learning with classical convex regularization theory.
arXiv Detail & Related papers (2022-06-11T06:08:25Z)
- A Probabilistic Deep Image Prior for Computational Tomography [0.19573380763700707]
Existing deep-learning based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty.
We construct a Bayesian prior for tomographic reconstruction that combines the classical total variation (TV) regulariser with the modern deep image prior (DIP).
For the inference, we develop an approach based on the linearised Laplace method, which is scalable to high-dimensional settings.
arXiv Detail & Related papers (2022-02-28T14:47:14Z)
- PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We present PDC-Net+, an Enhanced Probabilistic Dense Correspondence Network capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z)
- Learning Accurate Dense Correspondences and When to Trust Them [161.76275845530964]
We aim to estimate a dense flow field relating two images, coupled with a robust pixel-wise confidence map.
We develop a flexible probabilistic approach that jointly learns the flow prediction and its uncertainty.
Our approach obtains state-of-the-art results on challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-01-05T18:54:11Z)
- Probabilistic 3D surface reconstruction from sparse MRI information [58.14653650521129]
We present a novel probabilistic deep learning approach for concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric uncertainty prediction.
Our method is capable of reconstructing large surface meshes from three quasi-orthogonal MR imaging slices, even with limited training sets.
arXiv Detail & Related papers (2020-10-05T14:18:52Z)
- Adaptive confidence thresholding for monocular depth estimation [83.06265443599521]
We propose a new approach to leverage pseudo ground truth depth maps of stereo images generated from self-supervised stereo matching methods.
The confidence map of the pseudo ground truth depth map is estimated to mitigate performance degradation caused by inaccurate pseudo depth maps.
Experimental results demonstrate superior performance to state-of-the-art monocular depth estimation methods.
arXiv Detail & Related papers (2020-09-27T13:26:16Z)
- Towards a Theoretical Understanding of the Robustness of Variational Autoencoders [82.68133908421792]
We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations.
We develop a novel criterion for robustness in probabilistic models: $r$-robustness.
We show that VAEs trained using disentangling methods score well under our robustness metrics.
arXiv Detail & Related papers (2020-07-14T21:22:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.