Image-to-Image Regression with Distribution-Free Uncertainty
Quantification and Applications in Imaging
- URL: http://arxiv.org/abs/2202.05265v1
- Date: Thu, 10 Feb 2022 18:59:56 GMT
- Title: Image-to-Image Regression with Distribution-Free Uncertainty
Quantification and Applications in Imaging
- Authors: Anastasios N. Angelopoulos, Amit P. Kohli, Stephen Bates, Michael I.
Jordan, Jitendra Malik, Thayer Alshaabi, Srigokul Upadhyayula, and Yaniv
Romano
- Abstract summary: We show how to derive uncertainty intervals around each pixel that are guaranteed to contain the true value.
We evaluate our procedure on three image-to-image regression tasks.
- Score: 88.20869695803631
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-to-image regression is an important learning task, used frequently in
biological imaging. Current algorithms, however, do not generally offer
statistical guarantees that protect against a model's mistakes and
hallucinations. To address this, we develop uncertainty quantification
techniques with rigorous statistical guarantees for image-to-image regression
problems. In particular, we show how to derive uncertainty intervals around
each pixel that are guaranteed to contain the true value with a user-specified
confidence probability. Our methods work in conjunction with any base machine
learning model, such as a neural network, and endow it with formal mathematical
guarantees -- regardless of the true unknown data distribution or choice of
model. Furthermore, they are simple to implement and computationally
inexpensive. We evaluate our procedure on three image-to-image regression
tasks: quantitative phase microscopy, accelerated magnetic resonance imaging,
and super-resolution transmission electron microscopy of a Drosophila
melanogaster brain.
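The per-pixel guarantee described in the abstract can be illustrated with a minimal split-conformal calibration sketch. This is a simplified stand-in, not the paper's exact procedure (which uses risk-controlling prediction sets): here a single scale factor `lam` is calibrated on held-out data so that intervals built from any heuristic per-pixel uncertainty contain the true pixel values at the user-specified rate. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def calibrate_scale(preds, heur, targets, alpha=0.1):
    """Choose an interval scale lam so that, on a held-out calibration set,
    at least a (1 - alpha) fraction of pixels satisfy
    pred - lam * heur <= target <= pred + lam * heur.
    All inputs are arrays of shape (n_images, H, W)."""
    # Nonconformity score: distance to the truth, in units of the heuristic.
    scores = np.abs(targets - preds) / np.maximum(heur, 1e-8)
    n = scores.size
    # Conformal quantile with the usual finite-sample correction.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores.ravel(), level)

def pixel_intervals(pred, heur, lam):
    """Per-pixel lower/upper interval bounds for a new image."""
    return pred - lam * heur, pred + lam * heur
```

The base model's prediction `pred` and heuristic uncertainty `heur` can come from any network; only the calibration step touches held-out data, which is why the guarantee holds regardless of the model or data distribution.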
Related papers
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
- Learned, Uncertainty-driven Adaptive Acquisition for Photon-Efficient Multiphoton Microscopy [12.888922568191422]
We propose a method to simultaneously denoise and predict pixel-wise uncertainty for multiphoton imaging measurements.
We demonstrate our method on experimental noisy MPM measurements of human endometrium tissues.
We are the first to demonstrate distribution-free uncertainty quantification for a denoising task with real experimental data.
arXiv Detail & Related papers (2023-10-24T18:06:03Z)
- USIM-DAL: Uncertainty-aware Statistical Image Modeling-based Dense Active Learning for Super-resolution [47.38982697349244]
Dense regression is a widely used approach in computer vision for tasks such as image super-resolution, enhancement, depth estimation, etc.
We propose incorporating active learning into dense regression models to address this problem.
Active learning allows models to select the most informative samples for labeling, reducing the overall annotation cost while improving performance.
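The acquisition step described above can be sketched in a few lines: rank unlabeled samples by predictive uncertainty and label the top candidates. This is a generic illustration, not the USIM-DAL method itself; the function name and scoring rule are assumptions.

```python
import numpy as np

def select_most_informative(uncertainties, k):
    """Return indices of the k unlabeled samples with the highest predictive
    uncertainty -- the core acquisition step of uncertainty-based active learning."""
    return np.argsort(uncertainties)[::-1][:k]
```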
arXiv Detail & Related papers (2023-05-27T16:33:43Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
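The Nadaraya-Watson estimate referenced in this summary is a kernel-weighted average of training labels. The sketch below uses a Gaussian kernel and one-hot labels for illustration; it is a textbook version of the estimator, not the NUQ paper's implementation.

```python
import numpy as np

def nw_class_probs(x, X, y, num_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(label | x): kernel-weighted average
    of one-hot training labels, using an assumed Gaussian kernel."""
    d2 = np.sum((X - x) ** 2, axis=1)          # squared distances to x
    w = np.exp(-d2 / (2 * bandwidth ** 2))     # kernel weights
    onehot = np.eye(num_classes)[y]            # (n, num_classes)
    return w @ onehot / w.sum()                # normalized class probabilities
```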
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Robustness via Uncertainty-aware Cycle Consistency [44.34422859532988]
Unpaired image-to-image translation refers to learning inter-image-domain mapping without corresponding image pairs.
Existing methods learn deterministic mappings without explicitly modelling the robustness to outliers or predictive uncertainty.
We propose a novel probabilistic method based on Uncertainty-aware Generalized Adaptive Cycle Consistency (UGAC).
arXiv Detail & Related papers (2021-10-24T15:33:21Z)
- Uncertainty-aware GAN with Adaptive Loss for Robust MRI Image Enhancement [3.222802562733787]
Conditional generative adversarial networks (GANs) have shown improved performance in learning photo-realistic image-to-image mappings.
This paper proposes a GAN-based framework that (i) models an adaptive loss function for robustness to OOD-noisy data and (ii) estimates the per-voxel uncertainty in the predictions.
We demonstrate our method on two key applications in medical imaging: (i) undersampled magnetic resonance imaging (MRI) reconstruction and (ii) MRI modality propagation.
arXiv Detail & Related papers (2021-10-07T11:29:03Z)
- Uncertainty-aware Generalized Adaptive CycleGAN [44.34422859532988]
Unpaired image-to-image translation refers to learning inter-image-domain mapping in an unsupervised manner.
Existing methods often learn deterministic mappings without explicitly modelling the robustness to outliers or predictive uncertainty.
We propose a novel probabilistic method called Uncertainty-aware Generalized Adaptive Cycle Consistency (UGAC).
arXiv Detail & Related papers (2021-02-23T15:22:35Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.