UncertaINR: Uncertainty Quantification of End-to-End Implicit Neural
Representations for Computed Tomography
- URL: http://arxiv.org/abs/2202.10847v3
- Date: Tue, 2 May 2023 20:59:18 GMT
- Title: UncertaINR: Uncertainty Quantification of End-to-End Implicit Neural
Representations for Computed Tomography
- Authors: Francisca Vasconcelos, Bobby He, Nalini Singh, Yee Whye Teh
- Abstract summary: Implicit neural representations (INRs) have achieved impressive results for scene reconstruction and computer graphics.
We study a Bayesian reformulation of INRs, UncertaINR, in the context of computed tomography.
We find that they achieve well-calibrated uncertainty, while retaining accuracy competitive with other classical, INR-based, and CNN-based reconstruction techniques.
- Score: 35.84136481440458
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural representations (INRs) have achieved impressive results for
scene reconstruction and computer graphics, where their performance has
primarily been assessed on reconstruction accuracy. As INRs make their way into
other domains, where model predictions inform high-stakes decision-making,
uncertainty quantification of INR inference is becoming critical. To that end,
we study a Bayesian reformulation of INRs, UncertaINR, in the context of
computed tomography, and evaluate several Bayesian deep learning
implementations in terms of accuracy and calibration. We find that they achieve
well-calibrated uncertainty, while retaining accuracy competitive with other
classical, INR-based, and CNN-based reconstruction techniques. Contrary to
common intuition in the Bayesian deep learning literature, we find that INRs
obtain the best calibration with computationally efficient Monte Carlo dropout,
outperforming Hamiltonian Monte Carlo and deep ensembles. Moreover, in contrast
to the best-performing prior approaches, UncertaINR does not require a large
training dataset, but only a handful of validation images.
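To make the mechanics concrete, below is a minimal sketch of the best-calibrated variant described above: Monte Carlo dropout over a coordinate-MLP INR, where dropout stays active at inference and repeated stochastic forward passes yield a per-pixel mean reconstruction and uncertainty map. The architecture, dropout rate, and sample count are illustrative assumptions, not the paper's exact configuration, and the tomographic forward model used for training is omitted.

```python
# Minimal sketch: MC-dropout uncertainty for a coordinate-MLP INR.
# Architecture and hyperparameters are illustrative, not the paper's setup.
import torch
import torch.nn as nn

class DropoutINR(nn.Module):
    """Maps 2D pixel coordinates to attenuation values, with dropout layers."""
    def __init__(self, hidden=256, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):
        return self.net(coords)

@torch.no_grad()
def mc_dropout_reconstruction(model, coords, n_samples=64):
    """Keep dropout active at inference and average stochastic forward passes.
    Returns the per-pixel predictive mean and standard deviation."""
    model.train()  # leaves dropout ON; no gradient steps are taken here
    samples = torch.stack([model(coords) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)

# Usage: query the INR on a pixel grid to get a mean image + uncertainty map.
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 128),
                        torch.linspace(-1, 1, 128), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
mean_img, std_img = mc_dropout_reconstruction(DropoutINR(), coords)
```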
Related papers
- Hierarchical uncertainty estimation for learning-based registration in neuroimaging [10.964653898591413]
We propose a principled way to propagate uncertainties (epistemic or aleatoric) estimated at the level of individual spatial locations.
Experiments show that uncertainty-aware fitting of transformations improves the registration accuracy of brain MRI scans.
arXiv Detail & Related papers (2024-10-11T23:12:16Z)
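As a loose illustration of the entry above (not the authors' hierarchical scheme; names and shapes here are assumptions), uncertainty estimated over a displacement field can be propagated through warping by Monte Carlo sampling:

```python
# Hedged sketch: Monte Carlo propagation of a voxelwise Gaussian over a
# displacement field through image warping. Shapes/conventions are assumptions.
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp a (1,1,H,W) image by a (1,H,W,2) displacement field in [-1,1] units."""
    H, W = image.shape[-2:]
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0)  # identity sampling grid
    return F.grid_sample(image, base + flow, align_corners=True)

def propagate_uncertainty(image, flow_mean, flow_std, n_samples=32):
    """Sample displacement fields ~ N(mean, std^2), warp, and summarize."""
    warps = torch.stack([
        warp(image, flow_mean + flow_std * torch.randn_like(flow_mean))
        for _ in range(n_samples)
    ])
    return warps.mean(0), warps.std(0)  # mean warp and per-voxel uncertainty
```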
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error, we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines, enabling reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
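A hedged sketch of the general idea in the entry above: a smooth, differentiable coverage penalty added to a heteroscedastic regression loss. The paper's exact relaxation differs; `soft_coverage` and `calibrated_loss` are hypothetical names.

```python
# Hedged sketch: a differentiable coverage-gap penalty in the training loss.
import torch

def soft_coverage(y, mu, sigma, alpha=0.9, temp=50.0):
    """Smooth fraction of targets inside the central alpha-interval.
    The hard indicator |z| <= z_alpha is relaxed by a sigmoid of sharpness temp."""
    z_alpha = torch.distributions.Normal(0.0, 1.0).icdf(
        torch.tensor(0.5 + alpha / 2))
    z = (y - mu) / sigma
    return torch.sigmoid(temp * (z_alpha - z.abs())).mean()

def calibrated_loss(y, mu, log_sigma, alpha=0.9, lam=1.0):
    sigma = log_sigma.exp()
    nll = (0.5 * ((y - mu) / sigma) ** 2 + log_sigma).mean()  # Gaussian NLL
    cal = (soft_coverage(y, mu, sigma, alpha) - alpha) ** 2   # coverage gap
    return nll + lam * cal  # both terms backpropagate end-to-end
```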
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error in both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
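A hedged sketch of the general pattern behind the entry above, using box IoU as an illustrative accuracy proxy (not the paper's exact formulation):

```python
# Hedged sketch: an auxiliary train-time term that pulls each detection's class
# confidence toward the IoU of its predicted box with the matched ground truth.
import torch

def iou(a, b, eps=1e-7):
    """IoU of matched (N,4) boxes in (x1,y1,x2,y2) format."""
    lt = torch.max(a[:, :2], b[:, :2])
    rb = torch.min(a[:, 2:], b[:, 2:])
    inter = (rb - lt).clamp(min=0).prod(dim=1)
    area = lambda t: (t[:, 2] - t[:, 0]) * (t[:, 3] - t[:, 1])
    return inter / (area(a) + area(b) - inter + eps)

def calibration_aux_loss(conf, pred_boxes, gt_boxes):
    """|confidence - IoU| for matched detections; added to the usual losses."""
    return (conf - iou(pred_boxes, gt_boxes).detach()).abs().mean()
```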
- On the Calibration and Uncertainty with Pólya-Gamma Augmentation for Dialog Retrieval Models [30.519215651368683]
Dialog response retrieval models output a single score for a response, indicating how relevant it is to a given question.
Poor calibration of deep neural networks makes this single score unreliable, so that overconfident predictions can misinform user decisions.
We present PG-DRR, an efficient calibration and uncertainty estimation framework for dialog response retrieval models.
arXiv Detail & Related papers (2023-03-15T13:26:25Z)
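For intuition about the Pólya-Gamma machinery referenced above, here is a compact Gibbs sampler for a Bayesian logistic relevance model using the standard Polson-Scott-Windle augmentation; the truncated-series PG sampler, prior, and iteration counts are illustrative simplifications, not the PG-DRR implementation.

```python
# Hedged sketch: Polya-Gamma augmented Gibbs sampling for Bayesian logistic
# regression, giving posterior draws (hence uncertainty) over a relevance score.
import numpy as np

def sample_pg(c, K=200, rng=np.random):
    """Approximate PG(1, c) draws via a truncated sum-of-gammas series."""
    k = np.arange(1, K + 1)
    g = rng.gamma(1.0, 1.0, size=(len(c), K))
    denom = (k - 0.5) ** 2 + (c[:, None] / (2 * np.pi)) ** 2
    return (g / denom).sum(axis=1) / (2 * np.pi ** 2)

def gibbs_logistic(X, y, n_iter=500, prior_var=10.0, rng=np.random):
    """Gibbs sampling for p(beta | X, y) under a logistic likelihood, y in {0,1}."""
    n, d = X.shape
    beta, kappa = np.zeros(d), y - 0.5
    B_inv = np.eye(d) / prior_var
    draws = []
    for _ in range(n_iter):
        omega = sample_pg(np.abs(X @ beta), rng=rng)   # local augmentations
        V = np.linalg.inv(X.T @ (omega[:, None] * X) + B_inv)
        m = V @ (X.T @ kappa)
        beta = rng.multivariate_normal(m, V)           # conjugate Gaussian update
        draws.append(beta)
    return np.array(draws)  # posterior samples -> score uncertainty estimates
```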
- A Benchmark on Uncertainty Quantification for Deep Learning Prognostics [0.0]
We assess some of the latest developments in the field of uncertainty quantification for deep learning prognostics.
These include state-of-the-art variational inference algorithms for Bayesian neural networks (BNNs), as well as popular alternatives such as Monte Carlo dropout (MCD), deep ensembles (DE), and heteroscedastic neural networks (HNNs).
The performance of the methods is evaluated on a subset of the large NASA N-CMAPSS dataset for aircraft engines.
arXiv Detail & Related papers (2023-02-09T16:12:47Z)
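One of the benchmarked baselines above, a deep ensemble of heteroscedastic networks, can be sketched as follows; layer sizes and the feature dimension are made-up placeholders:

```python
# Hedged sketch: deep ensemble of heteroscedastic MLPs for RUL-style regression.
import torch
import torch.nn as nn

def make_member(in_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, 2))  # outputs (mu, log_var)

def ensemble_predict(members, x):
    """Combine M Gaussian heads into a mixture mean and total variance
    (aleatoric = mean of variances, epistemic = variance of means)."""
    outs = torch.stack([m(x) for m in members])     # (M, N, 2)
    mus, log_vars = outs[..., 0], outs[..., 1]
    mean = mus.mean(0)
    var = log_vars.exp().mean(0) + mus.var(0)
    return mean, var.sqrt()

members = [make_member(in_dim=24) for _ in range(5)]  # each trained separately
```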
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
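A hedged sketch of algorithm unfolding in the LISTA style, the generic technique REST builds on; the robustness modifications of the actual paper are omitted, and all hyperparameters are illustrative.

```python
# Hedged sketch: ISTA iterations unrolled into a network with learned weights.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """K learned iterations of x <- soft_threshold(x + W_k (y - A x))."""
    def __init__(self, A, K=10, step=0.1, lam=0.05):
        super().__init__()
        m, n = A.shape
        self.K = K
        self.register_buffer("A", A)
        self.W = nn.ModuleList(nn.Linear(m, n, bias=False) for _ in range(K))
        for w in self.W:
            w.weight.data.copy_(step * A.t())       # init as step * A^T
        self.theta = nn.Parameter(torch.full((K,), step * lam))  # thresholds

    def forward(self, y):
        x = y.new_zeros(y.shape[0], self.A.shape[1])
        for k in range(self.K):
            z = x + self.W[k](y - x @ self.A.t())   # gradient-like step
            x = z.sign() * (z.abs() - self.theta[k]).clamp(min=0)  # soft thresh
        return x

# Usage: net = UnrolledISTA(torch.randn(30, 100)); x_hat = net(torch.randn(8, 30))
```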
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
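A toy version of the frequentist idea above: leave-one-block-out resampling yields jackknife+-style intervals. The paper avoids retraining by approximating block deletion with influence functions; this sketch retrains for clarity, and `fit`/`predict` are user-supplied placeholders.

```python
# Hedged sketch: blockwise jackknife+ style prediction interval at a new point.
import numpy as np

def blockwise_jackknife_interval(fit, predict, X, y, x_new,
                                 n_blocks=10, alpha=0.1):
    """Leave-one-block-out residuals give a coverage-motivated interval."""
    blocks = np.array_split(np.arange(len(X)), n_blocks)
    lo, hi = [], []
    for b in blocks:
        keep = np.setdiff1d(np.arange(len(X)), b)
        model = fit(X[keep], y[keep])              # retrain without block b
        res = np.abs(y[b] - predict(model, X[b]))  # held-out residuals
        pred = predict(model, x_new[None])[0]
        lo.extend(pred - res)
        hi.extend(pred + res)
    return np.quantile(lo, alpha), np.quantile(hi, 1 - alpha)
```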
- On Calibration of Mixup Training for Deep Neural Networks [1.6242924916178283]
We argue and provide empirical evidence that, due to its fundamentals, Mixup does not necessarily improve calibration.
We propose a new loss, inspired by Bayes decision theory, which introduces a training framework for designing losses for probabilistic modelling.
We provide state-of-the-art accuracy with consistent improvements in calibration performance.
arXiv Detail & Related papers (2020-03-22T16:54:31Z)
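For reference, standard mixup training, whose calibration effect the entry above examines, can be sketched as below; the hyperparameter and names are generic, not the paper's.

```python
# Hedged sketch: mixup batch construction (convex combination of examples).
import torch

def mixup_batch(x, y_onehot, alpha=0.4):
    """Convex-combine a batch with a shuffled copy of itself."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.shape[0])
    x_mix = lam * x + (1 - lam) * x[idx]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[idx]
    return x_mix, y_mix  # train with soft-label cross-entropy on (x_mix, y_mix)
```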
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
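A hedged sketch of "being a bit Bayesian": a Laplace approximation over only the last layer of a trained binary classifier, with the standard probit adjustment that tempers confidence far from the data. Shapes, function names, and the prior precision are illustrative assumptions.

```python
# Hedged sketch: last-layer Laplace approximation with probit-adjusted predictions.
import math
import torch

def last_layer_laplace(phi, w_map, prior_prec=1.0):
    """Posterior covariance H^-1 of the logistic loss at the MAP weights,
    from last-layer features phi (N,D) and MAP weight vector w_map (D,)."""
    p = torch.sigmoid(phi @ w_map)
    lam = p * (1 - p)                                 # per-example curvature
    H = phi.t() @ (lam[:, None] * phi) + prior_prec * torch.eye(phi.shape[1])
    return torch.linalg.inv(H)

def probit_predict(phi_new, w_map, cov):
    """sigma(mean / sqrt(1 + pi * var / 8)): predictive variance shrinks
    confidence, fixing ReLU-network overconfidence far from the data."""
    mean = phi_new @ w_map
    var = (phi_new @ cov * phi_new).sum(-1)
    return torch.sigmoid(mean / torch.sqrt(1 + math.pi * var / 8))

# Usage: cov = last_layer_laplace(train_features, w_map)
#        probs = probit_predict(test_features, w_map, cov)
```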