Acquisition-invariant brain MRI segmentation with informative
uncertainties
- URL: http://arxiv.org/abs/2111.04094v1
- Date: Sun, 7 Nov 2021 13:58:04 GMT
- Title: Acquisition-invariant brain MRI segmentation with informative
uncertainties
- Authors: Pedro Borges, Richard Shaw, Thomas Varsavsky, Kerstin Klaser, David
Thomas, Ivana Drobnjak, Sebastien Ourselin, M Jorge Cardoso
- Abstract summary: Post-hoc multi-site correction methods exist but have strong assumptions that often do not hold in real-world scenarios.
This body of work showcases such an algorithm, which can be made robust to the physics of acquisition in the context of segmentation tasks.
We demonstrate that our method not only generalises to complete holdout datasets, preserving segmentation quality, but does so while also accounting for site-specific sequence choices.
- Score: 3.46329153611365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combining multi-site data can strengthen and uncover trends, but is a task
that is marred by the influence of site-specific covariates that can bias the
data and therefore any downstream analyses. Post-hoc multi-site correction
methods exist but have strong assumptions that often do not hold in real-world
scenarios. Algorithms should be designed in a way that can account for
site-specific effects, such as those that arise from sequence parameter
choices, and in instances where generalisation fails, should be able to
identify such a failure by means of explicit uncertainty modelling. This body
of work showcases such an algorithm, which can be made robust to the physics of
acquisition in the context of segmentation tasks, while simultaneously
modelling uncertainty. We demonstrate that our method not only generalises to
complete holdout datasets, preserving segmentation quality, but does so while
also accounting for site-specific sequence choices, which additionally allows it to
perform as a harmonisation tool.
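The abstract gives no implementation detail, but the core idea reads as a segmentation network that is conditioned on the acquisition physics and exposes a predictive distribution rather than a point estimate. The sketch below is a minimal illustration under our own assumptions, not the authors' architecture: the tiny convolutional net, the `seq_params` vector of MPRAGE-like (TI, TR, TE) values, and the use of Monte Carlo dropout for the uncertainty are all stand-ins.

```python
import torch
import torch.nn as nn

class PhysicsConditionedSegNet(nn.Module):
    """Toy segmentation net conditioned on acquisition parameters.

    Hypothetical sketch: a small conv encoder whose features are
    concatenated with a broadcast embedding of the sequence parameters
    (e.g. TI/TR/TE), plus dropout so Monte Carlo sampling at test time
    yields a predictive distribution rather than a point estimate.
    """

    def __init__(self, n_classes: int = 4, n_seq_params: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.2),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.param_embed = nn.Linear(n_seq_params, 16)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, image, seq_params):
        feats = self.encoder(image)                      # (B, 16, H, W)
        p = self.param_embed(seq_params)                 # (B, 16)
        p = p[:, :, None, None].expand(-1, -1, *feats.shape[2:])
        return self.head(torch.cat([feats, p], dim=1))   # per-pixel logits

def mc_predict(model, image, seq_params, n_samples=20):
    """Monte Carlo dropout: keep dropout active at test time, average the
    softmax maps, and use per-pixel entropy as the uncertainty signal."""
    model.train()  # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([
            model(image, seq_params).softmax(dim=1)
            for _ in range(n_samples)
        ])
    mean = probs.mean(0)
    entropy = -(mean * mean.clamp_min(1e-8).log()).sum(dim=1)
    return mean, entropy

# Usage on dummy data: one slice, MPRAGE-like (TI, TR, TE) in seconds.
model = PhysicsConditionedSegNet()
img = torch.randn(1, 1, 64, 64)
params = torch.tensor([[0.9, 2.0, 0.003]])
seg, unc = mc_predict(model, img, params)
```

High entropy where the physics conditioning fails to explain the input would flag the generalisation failures the abstract describes; the paper's actual uncertainty model may differ.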
Related papers
- Detecting and Identifying Selection Structure in Sequential Data [53.24493902162797]
We argue that the selective inclusion of data points based on latent objectives is common in practical situations, such as music sequences.
We show that selection structure is identifiable without any parametric assumptions or interventional experiments.
We also propose a provably correct algorithm to detect and identify selection structures as well as other types of dependencies.
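The summary stays abstract, so here is a minimal, hypothetical illustration of why selection leaves a detectable footprint: selecting samples on a function of two independent variables (a collider) induces dependence between the retained variables, which a conditional-independence-based detector can test for. The threshold rule below is invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=20_000)
y = rng.normal(size=20_000)          # x and y are independent by construction

# Hypothetical selection mechanism: keep a sample only when x + y exceeds
# a threshold. Conditioning on this latent "inclusion" variable makes the
# retained x and y dependent -- the footprint a detector can look for.
keep = (x + y) > 1.0
x_sel, y_sel = x[keep], y[keep]

print(f"corr before selection: {np.corrcoef(x, y)[0, 1]:+.3f}")
print(f"corr after  selection: {np.corrcoef(x_sel, y_sel)[0, 1]:+.3f}")
```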
arXiv Detail & Related papers (2024-06-29T20:56:34Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
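As a toy instance of the likelihood-ratio construction, consider a Gaussian mean with known variance: the set {theta : L(theta)/L(theta_hat) >= c}, with c calibrated from the chi-squared(1) quantile, is a valid confidence set. This fixed-sample sketch deliberately drops the sequential, anytime-valid machinery that is the paper's actual contribution.

```python
import numpy as np
from scipy.stats import chi2

def lr_confidence_set(data, grid, alpha=0.05, sigma=1.0):
    """Confidence set {theta : L(theta)/L(theta_hat) >= c} for a Gaussian
    mean with known sigma. For this model -2*log(LR) is exactly chi^2_1,
    so the set matches the textbook interval; a fixed-sample simplification
    of the sequential construction in the paper."""
    n = len(data)
    theta_hat = data.mean()
    # log L(theta) - log L(theta_hat) = -n (theta - theta_hat)^2 / (2 sigma^2)
    log_lr = -n * (grid - theta_hat) ** 2 / (2 * sigma**2)
    return grid[-2 * log_lr <= chi2.ppf(1 - alpha, df=1)]

rng = np.random.default_rng(1)
data = rng.normal(loc=0.3, scale=1.0, size=50)
grid = np.linspace(-1, 1, 2001)
cs = lr_confidence_set(data, grid)
print(f"95% LR confidence set ~ [{cs.min():.3f}, {cs.max():.3f}]")
```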
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
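A hedged sketch of the post-hoc idea: query a frozen model on sampled perturbations of the input and read ambiguity off the spread of the outputs. The Gaussian input noise used here is an illustrative stand-in; the paper's actual sampling mechanism may differ.

```python
import torch

def inference_time_samples(model, x, n_samples=16, noise_std=0.05):
    """Post-hoc, training-free uncertainty sketch: run the frozen model on
    small random perturbations of the input and treat the spread of the
    outputs as data ambiguity. No parametric form is assumed for the
    predictive distribution; the samples themselves represent it."""
    model.eval()
    with torch.no_grad():
        outs = torch.stack([model(x + noise_std * torch.randn_like(x))
                            for _ in range(n_samples)])
    return outs.mean(0), outs.std(0)   # point prediction + per-output spread

# Usage with a stand-in model:
model = torch.nn.Sequential(torch.nn.Conv2d(1, 2, 3, padding=1))
x = torch.randn(1, 1, 32, 32)
pred, spread = inference_time_samples(model, x)
```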
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Uncertainty estimation in Deep Learning for Panoptic segmentation [0.46040036610482665]
We show how ensemble-based uncertainty estimation approaches can be used in the panoptic segmentation domain.
Results are demonstrated on the COCO, KITTI-STEP and VIPER datasets.
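A minimal ensemble-uncertainty sketch, reduced to semantic segmentation (the instance/panoptic branches and the COCO/KITTI-STEP/VIPER evaluation are omitted): average the members' softmax maps and use the entropy of the mean as per-pixel uncertainty.

```python
import torch

def make_member():
    # Stand-in per-pixel classifier; real panoptic models are far larger.
    return torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(8, 5, 1),
    )

# Deep ensemble: independently initialised (and, in practice, independently
# trained) members; disagreement between them is the uncertainty signal.
ensemble = [make_member() for _ in range(5)]

x = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    probs = torch.stack([m(x).softmax(dim=1) for m in ensemble])

mean_probs = probs.mean(0)                                   # (1, C, H, W)
pred = mean_probs.argmax(dim=1)                              # semantic map
entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)
```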
arXiv Detail & Related papers (2023-04-04T19:54:35Z)
- Self-Supervised Likelihood Estimation with Energy Guidance for Anomaly Segmentation in Urban Scenes [42.66864386405585]
We design an energy-guided self-supervised framework for anomaly segmentation.
We exploit the strong context-dependent nature of the segmentation task.
Based on the proposed estimators, we devise an adaptive self-supervised training framework.
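The paper's estimators are not reproduced here, but one standard way to turn a segmentation head's logits into a per-pixel energy-based anomaly score is E = -logsumexp over the class logits, with high energy flagging pixels that the known classes explain poorly. The 19-class urban-scene head and the quantile threshold below are illustrative assumptions.

```python
import torch

def pixel_energy_score(logits):
    """Per-pixel energy E = -logsumexp over class logits; high energy means
    the in-distribution classes explain the pixel poorly, marking it as a
    candidate anomaly. A standard energy score used as a stand-in for the
    paper's likelihood estimators."""
    return -torch.logsumexp(logits, dim=1)   # (B, H, W)

# Usage with a stand-in segmentation head over 19 urban-scene classes:
logits = torch.randn(1, 19, 128, 256)
anomaly_map = pixel_energy_score(logits)
anomaly_mask = anomaly_map > anomaly_map.quantile(0.95)  # illustrative cut-off
```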
arXiv Detail & Related papers (2023-02-14T03:54:32Z)
- Modeling Multimodal Aleatoric Uncertainty in Segmentation with Mixture of Stochastic Expert [24.216869988183092]
We focus on capturing the data-inherent uncertainty (aka aleatoric uncertainty) in segmentation, typically when ambiguities exist in input images.
We propose a novel mixture of experts (MoSE) model, where each expert network estimates a distinct mode of aleatoric uncertainty.
We develop a Wasserstein-like loss that directly minimizes the distribution distance between the MoSE and ground truth annotations.
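A rough sketch of the MoSE shape under simplifications of our own: K expert heads plus a gate, trained here with a minimum-cost matching between experts and annotations as a crude surrogate for the paper's Wasserstein-like loss (the real loss is a distribution distance over the full sets of predictions and annotations).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

class TinyMoSE(nn.Module):
    """K expert heads over shared features plus a gate emitting one weight
    per expert; each expert is meant to capture one mode of the annotator
    disagreement (aleatoric uncertainty)."""
    def __init__(self, n_experts=3, n_classes=2):
        super().__init__()
        self.backbone = nn.Conv2d(1, 8, 3, padding=1)
        self.experts = nn.ModuleList(
            [nn.Conv2d(8, n_classes, 1) for _ in range(n_experts)])
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(8, n_experts))

    def forward(self, x):
        h = F.relu(self.backbone(x))
        logits = torch.stack([e(h) for e in self.experts], dim=1)  # (B,K,C,H,W)
        return logits, self.gate(h).softmax(dim=-1)                # (B,K)

def matching_loss(logits, gate, annotations):
    """Crude surrogate for the Wasserstein-like loss: a cost matrix of
    per-pixel cross-entropies between every (expert, annotation) pair, then
    each expert pays its min-cost matched annotation, weighted by the gate."""
    B, K, _, _, _ = logits.shape
    A = annotations.shape[1]                     # annotations: (B, A, H, W) long
    cost = torch.zeros(B, K, A)
    for k in range(K):
        for a in range(A):
            cost[:, k, a] = F.cross_entropy(
                logits[:, k], annotations[:, a], reduction="none"
            ).mean(dim=(1, 2))
    loss = 0.0
    for b in range(B):
        rows, cols = linear_sum_assignment(cost[b].detach().numpy())
        loss = loss + (gate[b, rows] * cost[b, rows, cols]).sum()
    return loss / B

# Usage: two images, four annotators each.
model = TinyMoSE()
x = torch.randn(2, 1, 32, 32)
ann = torch.randint(0, 2, (2, 4, 32, 32))
logits, gate = model(x)
print(matching_loss(logits, gate, ann).item())
```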
arXiv Detail & Related papers (2022-12-14T16:48:21Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
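The EM algorithm itself is beyond a short sketch, but the flavour of interval-valued counterfactual answers can be shown with the classic Tian-Pearl bounds on the probability of necessity and sufficiency (PNS), computable from experimental quantities; this illustrates the output type, not the paper's method, and the interval collapses to a point only under extra assumptions such as monotonicity.

```python
def pns_bounds(p_y_do_x1: float, p_y_do_x0: float):
    """Tian-Pearl bounds on PNS = P(Y_{x=1}=1, Y_{x=0}=0) from the two
    experimental quantities P(Y=1 | do(X=1)) and P(Y=1 | do(X=0)).
    The answer is an interval; it collapses to a point only under extra
    assumptions such as monotonicity (treatment never harms)."""
    lower = max(0.0, p_y_do_x1 - p_y_do_x0)
    upper = min(p_y_do_x1, 1.0 - p_y_do_x0)
    return lower, upper

# Example: 80% recover under treatment, 30% recover untreated.
lo, hi = pns_bounds(0.8, 0.3)
print(f"PNS in [{lo:.2f}, {hi:.2f}]")   # PNS in [0.50, 0.70]
```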
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
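A minimal Nadaraya-Watson sketch: kernel-weight the one-hot training labels to estimate p(y | x), then read uncertainty off the entropy of that estimate and the total kernel mass (which decays far from the data). Raw 2-D features and a fixed RBF bandwidth are simplifications; NUQ works in a trained classifier's representation space with principled bandwidth selection.

```python
import numpy as np

def nw_label_distribution(query, train_x, train_y, n_classes, bandwidth=0.5):
    """Nadaraya-Watson estimate of p(y | x): a kernel-weighted average of
    one-hot training labels. Its entropy signals label ambiguity, while a
    small total kernel mass signals the query is far from the data."""
    d2 = ((train_x - query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth**2))            # RBF kernel weights
    onehot = np.eye(n_classes)[train_y]
    p = w @ onehot / max(w.sum(), 1e-12)            # conditional label dist.
    entropy = -(p * np.log(np.clip(p, 1e-12, None))).sum()
    return p, entropy, w.sum()

rng = np.random.default_rng(2)
train_x = rng.normal(size=(200, 2))
train_y = (train_x[:, 0] > 0).astype(int)
p, h, mass = nw_label_distribution(np.array([3.0, 0.0]), train_x, train_y, 2)
```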
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Probabilistic Deep Learning for Instance Segmentation [9.62543698736491]
We propose a generic method to obtain model-inherent uncertainty estimates within proposal-free instance segmentation models.
We evaluate our method on the BBBC010 C. elegans dataset, where it yields competitive performance.
arXiv Detail & Related papers (2020-08-24T19:51:48Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
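A minimal distributionally robust training step in the Wasserstein-penalty style (an assumption of ours; the paper's exact formulation and its privacy mechanisms are not shown): an inner ascent loop perturbs the batch to approximately maximise loss minus a transport penalty, and the outer step descends on the perturbed data.

```python
import torch
import torch.nn.functional as F

def dro_step(model, opt, x, y, gamma=1.0, inner_steps=5, inner_lr=0.1):
    """One distributionally robust update: the inner loop adversarially
    perturbs the batch, maximising loss minus a gamma-weighted transport
    penalty ||z - x||^2; the outer step then descends on the perturbed data."""
    z = x.clone().requires_grad_(True)
    for _ in range(inner_steps):                      # inner maximisation
        obj = F.cross_entropy(model(z), y) - gamma * ((z - x) ** 2).mean()
        grad, = torch.autograd.grad(obj, z)
        z = (z + inner_lr * grad).detach().requires_grad_(True)
    opt.zero_grad()
    loss = F.cross_entropy(model(z.detach()), y)      # outer minimisation
    loss.backward()
    opt.step()
    return loss.item()

# Usage on a stand-in linear classifier:
model = torch.nn.Linear(10, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 10), torch.randint(0, 3, (32,))
print(dro_step(model, opt, x, y))
```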
arXiv Detail & Related papers (2020-07-07T18:25:25Z)