Uncertainty-Aware Regularization for Image-to-Image Translation
- URL: http://arxiv.org/abs/2412.01705v1
- Date: Sun, 24 Nov 2024 14:05:27 GMT
- Title: Uncertainty-Aware Regularization for Image-to-Image Translation
- Authors: Anuja Vats, Ivar Farup, Marius Pedersen, Kiran Raja
- Abstract summary: We propose a method to improve uncertainty estimation in medical Image-to-Image (I2I) translation.
Our model integrates aleatoric uncertainty and employs Uncertainty-Aware Regularization (UAR) inspired by simple priors to refine uncertainty estimates.
- Score: 2.5274064055508174
- Abstract: The importance of quantifying uncertainty in deep networks has become paramount for reliable real-world applications. In this paper, we propose a method to improve uncertainty estimation in medical Image-to-Image (I2I) translation. Our model integrates aleatoric uncertainty and employs Uncertainty-Aware Regularization (UAR) inspired by simple priors to refine uncertainty estimates and enhance reconstruction quality. We show that by leveraging simple priors on parameters, our approach captures more robust uncertainty maps, effectively refining them to indicate precisely where the network encounters difficulties, while being less affected by noise. Our experiments demonstrate that UAR not only improves translation performance, but also provides better uncertainty estimations, particularly in the presence of noise and artifacts. We validate our approach using two medical imaging datasets, showcasing its effectiveness in maintaining high confidence in familiar regions while accurately identifying areas of uncertainty in novel/ambiguous scenarios.
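The abstract does not spell out the exact form of UAR, but the general recipe it describes can be sketched. The following minimal PyTorch example shows a translation network with a per-pixel mean and log-variance head trained with a heteroscedastic Gaussian negative log-likelihood (aleatoric uncertainty), plus an illustrative "simple prior" penalty on the predicted variance map. The network `net`, the weight `lambda_reg`, and the smoothness prior are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def heteroscedastic_nll(mean, log_var, target):
    """Gaussian NLL with a per-pixel variance head (aleatoric uncertainty)."""
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2
            + 0.5 * log_var).mean()

def variance_prior(log_var):
    """Illustrative 'simple prior': prefer small, spatially smooth uncertainty maps.
    A stand-in for the paper's UAR term, whose exact form is not given here."""
    tv = ((log_var[..., 1:, :] - log_var[..., :-1, :]).abs().mean()
          + (log_var[..., :, 1:] - log_var[..., :, :-1]).abs().mean())
    return tv + log_var.abs().mean()

def uar_loss(net, x, y, lambda_reg=0.01):
    # Assumes net(x) returns (mean, log_variance) maps for the translated image.
    mean, log_var = net(x)
    return heteroscedastic_nll(mean, log_var, y) + lambda_reg * variance_prior(log_var)
```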
Related papers
- Uncertainty Quantification in Stereo Matching [61.73532883992135]
We propose a new framework for stereo matching and its uncertainty quantification.
We adopt Bayes risk as a measure of uncertainty and estimate data and model uncertainty separately.
We apply our uncertainty method to improve prediction accuracy by selecting data points with small uncertainties.
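A small NumPy sketch of the selection step described above: keep only the predictions whose estimated uncertainty falls below a quantile threshold and evaluate accuracy on that subset. The array names and the 80% keep fraction are illustrative, not values from the paper.

```python
import numpy as np

def select_confident(disparity, uncertainty, keep_fraction=0.8):
    """Keep the predictions with the smallest estimated uncertainty."""
    thresh = np.quantile(uncertainty, keep_fraction)
    mask = uncertainty <= thresh
    return disparity[mask], mask

# Example: error measured only on the retained, low-uncertainty pixels:
# err = np.abs(disparity - ground_truth)[mask].mean()
```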
arXiv Detail & Related papers (2024-12-24T23:28:20Z)
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- EDUE: Expert Disagreement-Guided One-Pass Uncertainty Estimation for Medical Image Segmentation [1.757276115858037]
This paper proposes an Expert Disagreement-Guided Uncertainty Estimation (EDUE) for medical image segmentation.
By leveraging variability in ground-truth annotations from multiple raters, we guide the model during training and incorporate random sampling-based strategies to enhance calibration confidence.
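The EDUE objective is not reproduced here; the sketch below only illustrates the ingredient the summary highlights: turning multiple raters' masks into a per-pixel disagreement map that can serve as a calibration target for the model's predicted uncertainty. Pairing it with an L2 penalty is an assumption.

```python
import numpy as np

def rater_disagreement(masks):
    """masks: (num_raters, H, W) binary annotations from different experts.
    Returns a per-pixel disagreement map in [0, 0.25] (variance of a Bernoulli)."""
    p = masks.mean(axis=0)   # fraction of raters marking each pixel
    return p * (1.0 - p)     # high where experts disagree

# During training, a model's uncertainty map could be regressed toward a scaled
# version of this disagreement map, e.g. with a mean-squared-error term.
```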
arXiv Detail & Related papers (2024-03-25T10:13:52Z)
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Instant Uncertainty Calibration of NeRFs Using a Meta-calibrator [60.47106421809998]
We introduce the concept of a meta-calibrator that performs uncertainty calibration for NeRFs with a single forward pass.
We show that the meta-calibrator can generalize on unseen scenes and achieves well-calibrated and state-of-the-art uncertainty for NeRFs.
arXiv Detail & Related papers (2023-12-04T21:29:31Z)
- Equivariant Bootstrapping for Uncertainty Quantification in Imaging Inverse Problems [0.24475591916185502]
We present a new uncertainty quantification methodology based on an equivariant formulation of the parametric bootstrap algorithm.
The proposed methodology is general and can be easily applied with any image reconstruction technique.
We demonstrate the proposed approach with a series of numerical experiments and through comparisons with alternative uncertainty quantification strategies.
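A rough NumPy sketch of a parametric bootstrap randomized over a simple transform group (here horizontal and vertical flips); `forward_op`, `recon`, and `sigma` are placeholders for the measurement operator, the reconstruction method, and the noise level, and the details differ from the paper's formulation.

```python
import numpy as np

FLIPS = [lambda x: x, np.flipud, np.fliplr]   # a tiny, self-inverse transform group

def equivariant_bootstrap(x_hat, forward_op, recon, sigma, n_samples=50, seed=0):
    """Parametric bootstrap around a reconstruction x_hat, randomized over flips."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        t = FLIPS[rng.integers(len(FLIPS))]
        meas = forward_op(t(x_hat))                       # re-simulate measurements
        y_boot = meas + sigma * rng.standard_normal(meas.shape)
        samples.append(t(recon(y_boot)))                  # flips are their own inverse
    return np.stack(samples).std(axis=0)                  # pixel-wise uncertainty map
```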
arXiv Detail & Related papers (2023-10-18T09:43:15Z)
- Principal Uncertainty Quantification with Spatial Correlation for Image Restoration Problems [35.46703074728443]
PUQ -- Principal Uncertainty Quantification -- is a novel definition and corresponding analysis of uncertainty regions.
We derive uncertainty intervals around principal components of the empirical posterior distribution, forming an ambiguity region.
Our approach is verified through experiments on image colorization, super-resolution, and inpainting.
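A simplified NumPy sketch of the idea: draw samples from an empirical posterior over restorations, compute the principal components of their variation, and report per-component intervals. The sampler and the interval rule (plain quantiles here) are placeholders, not the paper's calibrated procedure.

```python
import numpy as np

def principal_uncertainty(samples, n_components=3, alpha=0.05):
    """samples: (N, H*W) flattened posterior samples of the restored image."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    # Principal directions of posterior variation via SVD.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pcs = vt[:n_components]                      # (n_components, H*W)
    coeffs = centered @ pcs.T                    # projection of each sample
    lo = np.quantile(coeffs, alpha / 2, axis=0)
    hi = np.quantile(coeffs, 1 - alpha / 2, axis=0)
    return mean, pcs, lo, hi                     # intervals along each principal axis
```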
arXiv Detail & Related papers (2023-05-17T11:08:13Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which uses an uncertainty-calibrated error metric to filter reliable data.
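DEviS builds on subjective logic; the sketch below shows only the standard evidential recipe this family of methods shares: non-negative evidence defines a Dirichlet over class probabilities, whose spread gives a per-pixel uncertainty that can be thresholded to filter unreliable predictions. The names and threshold are illustrative, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def evidential_outputs(logits):
    """logits: (B, K, H, W) raw segmentation scores for K classes."""
    evidence = F.softplus(logits)                 # non-negative evidence per class
    alpha = evidence + 1.0                        # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)
    prob = alpha / strength                       # expected class probabilities
    uncertainty = logits.shape[1] / strength      # subjective-logic vacuity, in (0, 1]
    return prob, uncertainty

def filter_reliable(prob, uncertainty, max_u=0.3):
    """Keep pixels whose evidential uncertainty is below a threshold."""
    mask = uncertainty.squeeze(1) < max_u
    return prob.argmax(dim=1), mask
```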
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- Bayesian autoencoders with uncertainty quantification: Towards trustworthy anomaly detection [78.24964622317634]
In this work, the formulation of Bayesian autoencoders (BAEs) is adopted to quantify the total anomaly uncertainty.
To evaluate the quality of uncertainty, we consider the task of classifying anomalies with the additional option of rejecting predictions of high uncertainty.
Our experiments demonstrate the effectiveness of the BAE and total anomaly uncertainty on a set of benchmark datasets and two real datasets for manufacturing.
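A small sketch of the evaluation protocol the summary mentions: score anomalies, attach an uncertainty to each score, and abstain on the most uncertain fraction before measuring accuracy. The uncertainty source (e.g., variance over BAE posterior samples) and the rejection rate are assumptions.

```python
import numpy as np

def classify_with_rejection(scores, uncertainties, score_thresh, reject_rate=0.1):
    """Flag anomalies by score, but abstain on the most uncertain predictions."""
    cutoff = np.quantile(uncertainties, 1.0 - reject_rate)
    accepted = uncertainties < cutoff
    predictions = scores > score_thresh          # True = anomaly
    return predictions, accepted                 # evaluate accuracy only where accepted
```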
arXiv Detail & Related papers (2022-02-25T12:20:04Z)
- Towards Reducing Aleatoric Uncertainty for Medical Imaging Tasks [5.220940151628734]
Uncertainty in predictions can be attributed to noise or randomness in the data (aleatoric) or to incorrect model inferences (epistemic).
This work proposes a novel approach that interprets data uncertainty estimated from a self-supervised task as noise inherent to the data.
Our findings demonstrate the effectiveness of the proposed approach in significantly reducing the aleatoric uncertainty in the image segmentation task.
arXiv Detail & Related papers (2021-10-21T09:31:00Z)
- Uncertainty-aware GAN with Adaptive Loss for Robust MRI Image Enhancement [3.222802562733787]
Conditional generative adversarial networks (GANs) have shown improved performance in learning photo-realistic image-to-image mappings.
This paper proposes a GAN-based framework that (i) models an adaptive loss function for robustness to out-of-distribution (OOD) noisy data and (ii) estimates the per-voxel uncertainty in the predictions.
We demonstrate our method on two key applications in medical imaging: (i) undersampled magnetic resonance imaging (MRI) reconstruction and (ii) MRI modality propagation.
arXiv Detail & Related papers (2021-10-07T11:29:03Z)