Learning to Predict Error for MRI Reconstruction
- URL: http://arxiv.org/abs/2002.05582v3
- Date: Wed, 7 Jul 2021 00:14:13 GMT
- Title: Learning to Predict Error for MRI Reconstruction
- Authors: Shi Hu and Nicola Pezzotti and Max Welling
- Abstract summary: We demonstrate that the predictive uncertainty estimated by current methods does not correlate strongly with prediction error.
We propose a novel method that estimates the target labels and the magnitude of the prediction error in two steps.
- Score: 67.76632988696943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In healthcare applications, predictive uncertainty has been used to assess
predictive accuracy. In this paper, we demonstrate that the predictive uncertainty
estimated by current methods does not correlate strongly with prediction error:
decomposing the prediction error into random and systematic components shows that
the estimated uncertainty captures only the variance of the random error, and
therefore misses the systematic part. In addition, we observe that current methods
unnecessarily compromise performance by modifying the model and training loss to
estimate the target and the uncertainty jointly. We show that estimating them
separately, without such modifications, improves performance. Following this, we
propose a novel method that estimates the target labels and the magnitude of the
prediction error in two steps. We demonstrate this method on a large-scale MRI
reconstruction task and achieve significantly better results than state-of-the-art
uncertainty estimation methods.
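To make the abstract's decomposition concrete, here is one hedged formalization; the notation is ours, not taken from the paper. Write the prediction error as the sum of a zero-mean random part and a systematic part (bias), so that a variance-style uncertainty estimate covers only the first term:

```latex
% One hedged formalization (our notation, not the paper's):
% e is the prediction error, \varepsilon its zero-mean random part,
% b its systematic part, \hat{\sigma}^2 the reported predictive uncertainty.
\begin{align}
  e &= y - \hat{y} = \varepsilon + b, & \mathbb{E}[\varepsilon] &= 0, \\
  \hat{\sigma}^2 &\approx \operatorname{Var}(\varepsilon), &
  \mathbb{E}[e^2] &= \operatorname{Var}(\varepsilon) + b^2 .
\end{align}
```

On this reading, a model can be confidently wrong: the bias term b^2 can dominate the expected squared error while Var(ε) stays small, which is consistent with the weak correlation the abstract reports.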
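The two-step method itself can be sketched in a few lines. The following PyTorch toy (the architectures, random stand-in data, and the per-pixel L1 error target are all placeholder assumptions, not the authors' exact setup) first trains a reconstruction network with an unmodified loss, then freezes it and trains a second network to regress the magnitude of its error:

```python
# Hedged sketch of the two-step recipe (models, shapes, and the L1 error
# target are our assumptions): step 1 trains the reconstruction alone with
# an unmodified loss; step 2 freezes it and regresses |error| per pixel.
import torch
import torch.nn as nn

torch.manual_seed(0)
undersampled = torch.randn(64, 1, 32, 32)  # stand-in for zero-filled MRI inputs
target = torch.randn(64, 1, 32, 32)        # stand-in for fully sampled ground truth

recon_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
error_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))

# Step 1: train the reconstruction model without any uncertainty head.
opt1 = torch.optim.Adam(recon_net.parameters(), lr=1e-3)
for _ in range(200):
    opt1.zero_grad()
    loss = (recon_net(undersampled) - target).abs().mean()
    loss.backward()
    opt1.step()

# Step 2: freeze the reconstruction and regress the error magnitude.
recon_net.requires_grad_(False)
with torch.no_grad():
    err_target = (recon_net(undersampled) - target).abs()
opt2 = torch.optim.Adam(error_net.parameters(), lr=1e-3)
for _ in range(200):
    opt2.zero_grad()
    loss = (error_net(undersampled) - err_target).abs().mean()
    loss.backward()
    opt2.step()
```

Because the second network is trained after the first, the reconstruction loss is never modified, which is the point the abstract makes against joint estimation.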
Related papers
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty that accounts for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z) - The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss in order to quantify uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z) - On Calibrated Model Uncertainty in Deep Learning [0.0]
We extend approximate inference for the loss-calibrated Bayesian framework to dropweights-based Bayesian neural networks.
We show that decisions informed by loss-calibrated uncertainty can improve diagnostic performance to a greater extent than straightforward alternatives.
arXiv Detail & Related papers (2022-06-15T20:16:32Z) - Uncertainty Estimation for Heatmap-based Landmark Localization [4.673063715963989]
We propose Quantile Binning, a data-driven method to categorise predictions by uncertainty with estimated error bounds.
We demonstrate this framework by comparing and contrasting three uncertainty measures.
We conclude by illustrating how filtering out gross mispredictions caught in our Quantile Bins significantly improves the proportion of predictions under an acceptable error threshold.
arXiv Detail & Related papers (2022-03-04T14:40:44Z) - Diffusion Tensor Estimation with Uncertainty Calibration [6.5085381751712506]
- Diffusion Tensor Estimation with Uncertainty Calibration [6.5085381751712506]
We propose a deep learning method to estimate the diffusion tensor and compute the estimation uncertainty.
Data-dependent uncertainty is computed directly by the network and learned via loss attenuation.
We show that the estimation uncertainties computed by the new method can highlight the model's biases, detect domain shift, and reflect the strength of noise in the measurements.
arXiv Detail & Related papers (2021-11-21T15:58:01Z) - Dense Uncertainty Estimation [62.23555922631451]
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z) - Loss Estimators Improve Model Generalization [36.520569284970456]
- Loss Estimators Improve Model Generalization [36.520569284970456]
We propose to train a loss estimator alongside the predictive model, using a contrastive training objective, to directly estimate the prediction uncertainties.
We show the impact of loss estimators on model generalization, in terms of both their fidelity on in-distribution data and their ability to detect out-of-distribution samples or new classes unseen during training.
arXiv Detail & Related papers (2021-03-05T16:35:10Z) - DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of the out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z) - The Aleatoric Uncertainty Estimation Using a Separate Formulation with
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for estimating a signal and its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.