Distributional Gaussian Processes Layers for Out-of-Distribution
Detection
- URL: http://arxiv.org/abs/2206.13346v1
- Date: Mon, 27 Jun 2022 14:49:48 GMT
- Title: Distributional Gaussian Processes Layers for Out-of-Distribution
Detection
- Authors: Sebastian G. Popescu, David J. Sharp, James H. Cole, Konstantinos
Kamnitsas and Ben Glocker
- Abstract summary: It is unclear whether out-of-distribution detection models reliant on deep neural networks are suitable for detecting domain shifts in medical imaging.
We propose a parameter-efficient Bayesian layer for hierarchical convolutional Gaussian Processes that incorporates Gaussian Processes operating in Wasserstein-2 space.
Our uncertainty estimates result in out-of-distribution detection that outperforms the capabilities of previous Bayesian networks.
- Score: 18.05109901753853
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models deployed on medical imaging tasks must be equipped
with out-of-distribution detection capabilities in order to avoid erroneous
predictions. It is unclear whether out-of-distribution detection models reliant
on deep neural networks are suitable for detecting domain shifts in medical
imaging. Gaussian Processes can reliably separate in-distribution data points
from out-of-distribution data points via their mathematical construction.
Hence, we propose a parameter-efficient Bayesian layer for hierarchical
convolutional Gaussian Processes that incorporates Gaussian Processes operating
in Wasserstein-2 space to reliably propagate uncertainty. This directly
replaces convolving Gaussian Processes with a distance-preserving affine
operator on distributions. Our experiments on brain tissue-segmentation show
that the resulting architecture approaches the performance of well-established
deterministic segmentation algorithms (U-Net), which has not been achieved with
previous hierarchical Gaussian Processes. Moreover, by applying the same
segmentation model to out-of-distribution data (i.e., images with pathology
such as brain tumors), we show that our uncertainty estimates result in
out-of-distribution detection that outperforms the capabilities of previous
Bayesian networks and reconstruction-based approaches that learn normative
distributions. To facilitate future work, our code is publicly available.
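As a minimal illustration of the geometry these layers operate in (not the authors' implementation), the sketch below computes the closed-form Wasserstein-2 distance between two Gaussians with diagonal covariance and shows how an affine map transforms a Gaussian's mean and covariance; all function and variable names are assumptions made for the example.

```python
# Illustrative sketch, assuming diagonal-covariance Gaussians; not the paper's code.
import numpy as np

def w2_diagonal_gaussians(mu1, sigma1, mu2, sigma2):
    """Wasserstein-2 distance between N(mu1, diag(sigma1^2)) and N(mu2, diag(sigma2^2)).

    The general formula W2^2 = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S2^0.5 S1 S2^0.5)^0.5)
    reduces to per-dimension terms when both covariances are diagonal.
    """
    return np.sqrt(np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2))

def affine_pushforward(mu, cov, A, b):
    """If x ~ N(mu, cov), then A @ x + b ~ N(A @ mu + b, A @ cov @ A.T)."""
    return A @ mu + b, A @ cov @ A.T

mu1, s1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
mu2, s2 = np.array([1.0, 0.5]), np.array([2.0, 0.5])
print("W2 distance:", w2_diagonal_gaussians(mu1, s1, mu2, s2))
print("Affine pushforward:", affine_pushforward(mu1, np.diag(s1 ** 2),
                                                np.array([[1.0, 0.5], [0.0, 1.0]]),
                                                np.array([0.1, -0.2])))
```

Propagating whole Gaussian distributions through such affine maps, rather than point estimates, is the basic mechanism by which uncertainty can be carried from layer to layer in the hierarchy described in the abstract.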
Related papers
- Sparse Variational Contaminated Noise Gaussian Process Regression with Applications in Geomagnetic Perturbations Forecasting [4.675221539472143]
We propose a scalable inference algorithm for fitting sparse Gaussian process regression models with contaminated normal noise on large datasets.
We show that our approach yields shorter prediction intervals for similar coverage and accuracy when compared to an artificial dense neural network baseline.
arXiv Detail & Related papers (2024-02-27T15:08:57Z)
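The contaminated normal noise mentioned in the entry above is conventionally a two-component Gaussian mixture: most observations receive noise from a narrow Gaussian, while a small fraction receive noise from an inflated one. The sketch below illustrates that standard likelihood only; the contamination probability eps and inflation factor k are assumed names, and the paper's sparse variational inference scheme is not reproduced here.

```python
# Standard contaminated normal likelihood (illustrative sketch, not the paper's algorithm).
import numpy as np

def contaminated_normal_logpdf(residuals, sigma=0.1, eps=0.05, k=10.0):
    """Log-density under (1 - eps) * N(0, sigma^2) + eps * N(0, (k * sigma)^2)."""
    def normal_pdf(x, s):
        return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    mixture = (1.0 - eps) * normal_pdf(residuals, sigma) + eps * normal_pdf(residuals, k * sigma)
    return np.log(mixture)

# A large residual (e.g. 0.9) is far more plausible under the mixture than under a
# single N(0, sigma^2), which is what makes such models robust to outlying observations.
print(contaminated_normal_logpdf(np.array([0.02, -0.05, 0.9])))
```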
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Distributed Bayesian Estimation in Sensor Networks: Consensus on Marginal Densities [15.038649101409804]
We derive a distributed provably-correct algorithm in the functional space of probability distributions over continuous variables.
We leverage these results to obtain new distributed estimators restricted to subsets of variables observed by individual agents.
This relates to applications such as cooperative localization and federated learning, where the data collected at any agent depends on a subset of all variables of interest.
arXiv Detail & Related papers (2023-12-02T21:10:06Z)
- Distribution Shift Inversion for Out-of-Distribution Prediction [57.22301285120695]
We propose a portable Distribution Shift Inversion algorithm for Out-of-Distribution (OoD) prediction.
We show that our method provides a general performance gain when plugged into a wide range of commonly used OoD algorithms.
arXiv Detail & Related papers (2023-06-14T08:00:49Z)
- Window-Based Distribution Shift Detection for Deep Neural Networks [21.73028341299301]
We study the case of monitoring the healthy operation of a deep neural network (DNN) receiving a stream of data.
Using selective prediction principles, we propose a distribution deviation detection method for DNNs.
Our novel detection method performs on par with or better than the state-of-the-art, while consuming substantially less time.
arXiv Detail & Related papers (2022-10-19T21:27:25Z)
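A generic way to monitor a stream for distribution deviation, loosely related to the entry above, is to compare a sliding window of the network's confidence scores against scores collected in-distribution using a two-sample test. The sketch below is a hedged illustration of that generic recipe only; it does not reproduce the selective-prediction construction of the paper.

```python
# Generic sliding-window shift detector (illustrative; not the paper's method).
import numpy as np
from scipy.stats import ks_2samp

def detect_shift(reference_scores, stream_scores, window=256, alpha=0.01):
    """Flag a shift when the last `window` stream scores differ from the reference."""
    recent = np.asarray(stream_scores[-window:])
    _statistic, p_value = ks_2samp(np.asarray(reference_scores), recent)
    return p_value < alpha
```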
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Distributional Gaussian Process Layers for Outlier Detection in Image Segmentation [15.086527565572073]
We propose a parameter efficient Bayesian layer for hierarchical convolutional Gaussian Processes.
Our experiments on brain tissue-segmentation show that the resulting architecture approaches the performance of well-established deterministic segmentation algorithms.
Our uncertainty estimates result in out-of-distribution detection that outperforms the capabilities of previous Bayesian networks.
arXiv Detail & Related papers (2021-04-28T13:37:10Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
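One minimal way to realise the entropy-raising step described in the entry above is to interpolate overconfident predictions towards the label prior, which increases their entropy. In the sketch below, the max-probability threshold and mixing weight lam are illustrative assumptions; the paper's criterion for locating unjustifiably overconfident regions is not reproduced.

```python
# Illustrative sketch: soften overconfident predictions towards the label prior.
import numpy as np

def soften_towards_prior(probs, prior, threshold=0.95, lam=0.5):
    """probs: (N, C) predicted class probabilities; prior: (C,) label prior."""
    probs = np.asarray(probs, dtype=float)
    prior = np.asarray(prior, dtype=float)
    overconfident = probs.max(axis=1) > threshold        # assumed overconfidence test
    adjusted = probs.copy()
    adjusted[overconfident] = (1.0 - lam) * adjusted[overconfident] + lam * prior
    return adjusted
```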
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
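For the slice-wise approach in the entry above, a dissimilarity in a VAE's latent space can be illustrated as the distance from a test slice's latent code to the codes of healthy reference slices. The sketch below assumes the latent means have already been produced by an encoder; it is not the dissimilarity function defined in the paper.

```python
# Illustrative latent-space anomaly score (assumes precomputed VAE latent means).
import numpy as np

def latent_anomaly_score(z_test, z_reference):
    """z_test: (D,) latent code of a test slice; z_reference: (N, D) codes of healthy slices."""
    distances = np.linalg.norm(z_reference - z_test[None, :], axis=1)
    return distances.min()   # larger -> more dissimilar -> more likely to contain a tumour
```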
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)