Robustness via Uncertainty-aware Cycle Consistency
- URL: http://arxiv.org/abs/2110.12467v1
- Date: Sun, 24 Oct 2021 15:33:21 GMT
- Title: Robustness via Uncertainty-aware Cycle Consistency
- Authors: Uddeshya Upadhyay, Yanbei Chen, Zeynep Akata
- Abstract summary: Unpaired image-to-image translation refers to learning inter-image-domain mapping without corresponding image pairs.
Existing methods learn deterministic mappings without explicitly modelling the robustness to outliers or predictive uncertainty.
We propose a novel probabilistic method based on Uncertainty-aware Generalized Adaptive Cycle Consistency (UGAC).
- Score: 44.34422859532988
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unpaired image-to-image translation refers to learning inter-image-domain
mapping without corresponding image pairs. Existing methods learn deterministic
mappings without explicitly modelling the robustness to outliers or predictive
uncertainty, leading to performance degradation when encountering unseen
perturbations at test time. To address this, we propose a novel probabilistic
method based on Uncertainty-aware Generalized Adaptive Cycle Consistency
(UGAC), which models the per-pixel residual with a generalized Gaussian
distribution, capable of modelling heavy tails. We compare our model with a
wide variety of state-of-the-art methods on various challenging tasks,
including unpaired translation of natural images on standard datasets
spanning autonomous driving, maps, and facades, as well as in the medical
imaging domain of MRI. Experimental results demonstrate that our
method exhibits stronger robustness towards unseen perturbations in test data.
Code is released here:
https://github.com/ExplainableML/UncertaintyAwareCycleConsistency.
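As a rough sketch of the core idea (not the official implementation, which lives in the repository above), the cycle-consistency term can be written as the negative log-likelihood of a per-pixel generalized Gaussian whose scale and shape maps are predicted alongside the translated image; all names below are illustrative:

```python
import torch

def generalized_gaussian_nll(x, x_rec, alpha, beta, eps=1e-6):
    """Per-pixel negative log-likelihood of the residual (x - x_rec) under a
    zero-mean generalized Gaussian with scale map `alpha` and shape map `beta`.
    beta = 2 recovers a Gaussian; beta < 2 yields heavier tails."""
    alpha = alpha.clamp(min=eps)
    beta = beta.clamp(min=eps)
    resid = (x - x_rec).abs()
    nll = (resid / alpha) ** beta \
        - torch.log(beta) + torch.log(2.0 * alpha) + torch.lgamma(1.0 / beta)
    return nll.mean()

# Hypothetical generators G_AB, G_BA that also output per-pixel (alpha, beta)
# maps; the uncertainty-aware cycle loss then replaces the usual L1 term:
#   fake_b, _, _           = G_AB(real_a)
#   rec_a, alpha_a, beta_a = G_BA(fake_b)
#   cycle_loss = generalized_gaussian_nll(real_a, rec_a, alpha_a, beta_a)
```

Pixels with a small predicted shape parameter correspond to heavy-tailed residuals, so outliers and unseen perturbations are penalized less aggressively than under a fixed L1 or L2 cycle loss, while the predicted scale map doubles as a per-pixel uncertainty estimate.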
Related papers
- Confidence-Aware and Self-Supervised Image Anomaly Localisation [7.099105239108548]
We discuss an improved self-supervised single-class training strategy that supports the approximation of probabilistic inference with loosened feature locality constraints.
Our method is integrated into several out-of-distribution (OOD) detection models and we show evidence that our method outperforms the state-of-the-art on various benchmark datasets.
arXiv Detail & Related papers (2023-03-23T12:48:47Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuning model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
- Modeling Multimodal Aleatoric Uncertainty in Segmentation with Mixture of Stochastic Expert [24.216869988183092]
We focus on capturing the data-inherent uncertainty (aka aleatoric uncertainty) in segmentation, typically when ambiguities exist in input images.
We propose a novel mixture of experts (MoSE) model, where each expert network estimates a distinct mode of aleatoric uncertainty.
We develop a Wasserstein-like loss that directly minimizes the distribution distance between the MoSE and ground truth annotations.
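For intuition only, a generic entropy-regularized optimal-transport ("Wasserstein-like") distance between weighted expert predictions and a set of rater annotations might be sketched as follows; the Sinkhorn solver, L1 ground cost, and toy shapes are assumptions, not the authors' exact loss:

```python
import torch

def sinkhorn_distance(cost, a, b, eps=0.1, iters=100):
    """Entropy-regularized OT distance between discrete distributions
    a (K,) and b (J,) given a ground-cost matrix of shape (K, J)."""
    gibbs = torch.exp(-cost / eps)
    u = torch.ones_like(a)
    for _ in range(iters):
        v = b / (gibbs.t() @ u + 1e-12)
        u = a / (gibbs @ v + 1e-12)
    plan = u.unsqueeze(1) * gibbs * v.unsqueeze(0)  # transport plan (K, J)
    return (plan * cost).sum()

# Toy example: 3 expert predictions vs. 4 rater annotations on 8x8 maps.
expert_maps = torch.rand(3, 8, 8)
annotations = torch.rand(4, 8, 8)
gating = torch.softmax(torch.rand(3), dim=0)   # expert weights
rater_w = torch.full((4,), 0.25)               # uniform weight per rater
cost = (expert_maps.unsqueeze(1) - annotations.unsqueeze(0)).abs().mean(dim=(2, 3))
loss = sinkhorn_distance(cost, gating, rater_w)
```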
arXiv Detail & Related papers (2022-12-14T16:48:21Z)
- Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging [88.20869695803631]
We show how to derive uncertainty intervals around each pixel that are guaranteed to contain the true value.
We evaluate our procedure on three image-to-image regression tasks.
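As a generic, hedged illustration of per-pixel intervals with a target coverage level (a split-conformal style calibration, not necessarily the authors' exact procedure):

```python
import torch

def calibrate_interval_radius(preds, targets, alpha=0.1):
    """Calibrate a radius q on held-out data so that [pred - q, pred + q]
    covers roughly a (1 - alpha) fraction of true pixel values.
    preds, targets: (N, H, W) tensors from a calibration set."""
    scores = (preds - targets).abs().flatten()
    n = scores.numel()
    level = min(1.0, (n + 1) * (1 - alpha) / n)  # finite-sample correction
    return torch.quantile(scores, level)

# Usage sketch: q = calibrate_interval_radius(cal_preds, cal_targets)
#               lower, upper = test_pred - q, test_pred + q
```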
arXiv Detail & Related papers (2022-02-10T18:59:56Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
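For reference, the Nadaraya-Watson estimate of the conditional label distribution mentioned above can be written in a few lines; the RBF kernel, bandwidth, and toy data are illustrative choices, not the full NUQ method:

```python
import torch
import torch.nn.functional as F

def nadaraya_watson_probs(query, train_embs, train_labels, num_classes, bandwidth=1.0):
    """Kernel-weighted average of one-hot training labels: an estimate of
    p(y | x) at each query embedding. A flat estimate signals high uncertainty."""
    d2 = torch.cdist(query, train_embs).pow(2)              # (Q, N) squared distances
    w = torch.exp(-d2 / (2.0 * bandwidth ** 2))             # RBF kernel weights
    onehot = F.one_hot(train_labels, num_classes).float()   # (N, C)
    return (w @ onehot) / w.sum(dim=1, keepdim=True).clamp(min=1e-12)  # (Q, C)

# Toy usage with random embeddings and labels.
train_embs = torch.randn(100, 16)
train_labels = torch.randint(0, 5, (100,))
p = nadaraya_watson_probs(torch.randn(4, 16), train_embs, train_labels, num_classes=5)
```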
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Uncertainty-aware GAN with Adaptive Loss for Robust MRI Image Enhancement [3.222802562733787]
Conditional generative adversarial networks (GANs) have shown improved performance in learning photo-realistic image-to-image mappings.
This paper proposes a GAN-based framework that (i) models an adaptive loss function for robustness to OOD-noisy data and (ii) estimates the per-voxel uncertainty in the predictions.
We demonstrate our method on two key applications in medical imaging: (i) undersampled magnetic resonance imaging (MRI) reconstruction and (ii) MRI modality propagation.
arXiv Detail & Related papers (2021-10-07T11:29:03Z)
- Uncertainty-aware Generalized Adaptive CycleGAN [44.34422859532988]
Unpaired image-to-image translation refers to learning inter-image-domain mapping in an unsupervised manner.
Existing methods often learn deterministic mappings without explicitly modelling the robustness to outliers or predictive uncertainty.
We propose a novel probabilistic method called Uncertainty-aware Generalized Adaptive Cycle Consistency (UGAC).
arXiv Detail & Related papers (2021-02-23T15:22:35Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.