Transformer-based out-of-distribution detection for clinically safe
segmentation
- URL: http://arxiv.org/abs/2205.10650v2
- Date: Wed, 17 May 2023 21:38:23 GMT
- Title: Transformer-based out-of-distribution detection for clinically safe
segmentation
- Authors: Mark S Graham, Petru-Daniel Tudosiu, Paul Wright, Walter Hugo Lopez
Pinaya, U Jean-Marie, Yee Mah, James Teo, Rolf H Jäger, David Werring,
Parashkev Nachev, Sebastien Ourselin, M Jorge Cardoso
- Abstract summary: In a clinical setting it is essential that deployed image processing systems do not make confidently wrong predictions.
In this work, we focus on image segmentation and evaluate several approaches to network uncertainty.
We propose performing full 3D OOD detection using a VQ-GAN to provide a compressed latent representation of the image and a transformer to estimate the data likelihood.
- Score: 1.649654992058168
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a clinical setting it is essential that deployed image processing systems
are robust to the full range of inputs they might encounter and, in particular,
do not make confidently wrong predictions. The most popular approach to safe
processing is to train networks that can provide a measure of their
uncertainty, but these tend to fail for inputs that are far outside the
training data distribution. Recently, generative modelling approaches have been
proposed as an alternative; these can quantify the likelihood of a data sample
explicitly, filtering out any out-of-distribution (OOD) samples before further
processing is performed. In this work, we focus on image segmentation and
evaluate several approaches to network uncertainty in the far-OOD and near-OOD
cases for the task of segmenting haemorrhages in head CTs. We find all of these
approaches are unsuitable for safe segmentation as they provide confidently
wrong predictions when operating OOD. We propose performing full 3D OOD
detection using a VQ-GAN to provide a compressed latent representation of the
image and a transformer to estimate the data likelihood. Our approach
successfully identifies images in both the far- and near-OOD cases. We find a
strong relationship between image likelihood and the quality of a model's
segmentation, making this approach viable for filtering images unsuitable for
segmentation. To our knowledge, this is the first time transformers have been
applied to perform OOD detection on 3D image data. Code is available at
github.com/marksgraham/transformer-ood.
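To make the proposed pipeline concrete, the sketch below scores a head-CT volume by the likelihood an autoregressive transformer assigns to its VQ-GAN latent codes. The vqgan and transformer objects, their interfaces, and the thresholding step are illustrative assumptions rather than the released implementation (see the linked repository for that).

# Hedged sketch: score a 3D volume by the likelihood its discrete VQ-GAN latent
# codes receive under an autoregressive transformer. Model objects and method
# names are hypothetical stand-ins for the components described in the abstract.
import torch
import torch.nn.functional as F

@torch.no_grad()
def log_likelihood(volume, vqgan, transformer, sos_token):
    # volume: (1, 1, D, H, W) head CT, preprocessed as during training.
    codes = vqgan.encode(volume)                      # assumed: (1, L) code indices
    sos = torch.full((1, 1), sos_token, dtype=torch.long, device=codes.device)
    logits = transformer(torch.cat([sos, codes[:, :-1]], dim=1))  # (1, L, vocab)
    log_probs = F.log_softmax(logits, dim=-1)
    return log_probs.gather(-1, codes.unsqueeze(-1)).sum().item()

def is_ood(volume, vqgan, transformer, sos_token, threshold):
    # Threshold chosen on held-out in-distribution scans; low likelihood => flag
    # the image as OOD and withhold it from the downstream segmentation model.
    return log_likelihood(volume, vqgan, transformer, sos_token) < threshold

Because the abstract reports a strong relationship between image likelihood and segmentation quality, the same score can serve to filter images the segmentation model is likely to handle poorly.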
Related papers
- NODI: Out-Of-Distribution Detection with Noise from Diffusion [45.68745522344308]
Out-of-distribution (OOD) detection is a crucial part of deploying machine learning models safely.
Previous methods compute the OOD scores with limited usage of the in-distribution dataset.
A 3.5% performance gain is achieved with the MAE-based image encoder.
arXiv Detail & Related papers (2024-01-13T08:30:13Z)
- Improving Adversarial Robustness of Masked Autoencoders via Test-time Frequency-domain Prompting [133.55037976429088]
We investigate the adversarial robustness of vision transformers equipped with BERT pretraining (e.g., BEiT, MAE).
A surprising observation is that MAE has significantly worse adversarial robustness than other BERT pretraining methods.
We propose a simple yet effective way to boost the adversarial robustness of MAE.
arXiv Detail & Related papers (2023-08-20T16:27:17Z)
- Laplacian Segmentation Networks Improve Epistemic Uncertainty Quantification [21.154979285736268]
Image segmentation relies heavily on neural networks which are known to be overconfident.
This is a common scenario in the medical domain due to variations in equipment, acquisition sites, or image corruptions.
We propose Laplacian Segmentation Networks (LSN): methods which jointly model epistemic (model) and aleatoric (data) uncertainty for OOD detection.
arXiv Detail & Related papers (2023-03-23T09:23:57Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuning model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
- Solving Sample-Level Out-of-Distribution Detection on 3D Medical Images [0.06117371161379209]
Out-of-distribution (OOD) detection helps to identify data samples that differ from the training data distribution, increasing the model's reliability.
Recent works have developed DL-based OOD detection that achieves promising results on 2D medical images.
However, scaling most of these approaches on 3D images is computationally intractable.
We propose a histogram-based method that requires no DL and achieves almost perfect results in this domain.
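As a rough illustration of a histogram-based, deep-learning-free OOD score of the kind this summary describes, the snippet below compares a volume's intensity histogram against an average training histogram; the bin count, intensity range, and L1 distance are assumptions for illustration, not necessarily the paper's exact recipe.

# Hedged sketch of a histogram-based OOD score for 3D volumes (no deep learning).
# Bin edges (a typical CT Hounsfield range), the L1 distance, and the averaged
# reference histogram are illustrative choices, not the paper's exact method.
import numpy as np

def intensity_histogram(volume, bins=64, value_range=(-1024.0, 3071.0)):
    hist, _ = np.histogram(volume, bins=bins, range=value_range)
    return hist / max(hist.sum(), 1)

def fit_reference(train_volumes, bins=64):
    # Mean normalised histogram over in-distribution training volumes.
    return np.mean([intensity_histogram(v, bins) for v in train_volumes], axis=0)

def ood_score(volume, reference, bins=64):
    # Larger distance to the reference histogram => more likely OOD.
    return float(np.abs(intensity_histogram(volume, bins) - reference).sum())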
arXiv Detail & Related papers (2022-12-13T11:42:23Z)
- A Simple Test-Time Method for Out-of-Distribution Detection [45.11199798139358]
This paper proposes a simple Test-time Linear Training (ETLT) method for OOD detection.
We find that the probabilities of input images being out-of-distribution are surprisingly linearly correlated to the features extracted by neural networks.
We propose an online variant of the proposed method, which achieves promising performance and is more practical in real-world applications.
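A minimal sketch of the general idea (fitting a linear model from network features to an initial OOD estimate on the test batch itself) is given below; the choice of negative maximum softmax probability as the initial score and a plain least-squares fit are assumptions, not necessarily ETLT's exact formulation.

# Hedged sketch: refine per-image OOD scores by fitting a linear model from
# extracted features to an initial score computed on the same test batch.
import numpy as np

def initial_scores(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -probs.max(axis=1)          # higher = more likely OOD (assumed score)

def test_time_linear_scores(features, logits):
    y = initial_scores(logits)
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # add bias column
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef                    # refined OOD scores for the batch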
arXiv Detail & Related papers (2022-07-17T16:02:58Z)
- PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We present the Enhanced Probabilistic Dense Correspondence Network, PDC-Net+, capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z)
- OODformer: Out-Of-Distribution Detection Transformer [15.17006322500865]
In real-world safety-critical applications, it is important to be aware if a new data point is OOD.
This paper proposes a first-of-its-kind OOD detection architecture named OODformer.
arXiv Detail & Related papers (2021-07-19T15:46:38Z)
- Scene Uncertainty and the Wellington Posterior of Deterministic Image Classifiers [68.9065881270224]
We introduce the Wellington Posterior, which is the distribution of outcomes that would have been obtained in response to data that could have been generated by the same scene.
We explore the use of data augmentation, dropout, ensembling, single-view reconstruction, and model linearization to compute a Wellington Posterior.
Additional methods include the use of conditional generative models such as generative adversarial networks, neural radiance fields, and conditional prior networks.
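As a small illustration of the data-augmentation route mentioned above, the snippet below approximates a distribution over a deterministic classifier's outcomes by re-running it on perturbed copies of one input; the classifier, augmentation set, and class count are placeholders, and dropout, ensembling, and the generative approaches are alternative routes the paper explores.

# Hedged sketch: empirical distribution over predicted classes obtained by
# running a deterministic classifier on augmented copies of a single image.
import torch

@torch.no_grad()
def augmentation_outcome_distribution(image, classifier, augmentations, n_classes):
    counts = torch.zeros(n_classes)
    for augment in augmentations:
        pred = classifier(augment(image).unsqueeze(0)).argmax(dim=1)
        counts[pred] += 1
    return counts / counts.sum()       # empirical distribution over outcomes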
arXiv Detail & Related papers (2021-06-25T20:10:00Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- RAIN: A Simple Approach for Robust and Accurate Image Classification Networks [156.09526491791772]
It has been shown that the majority of existing adversarial defense methods achieve robustness at the cost of sacrificing prediction accuracy.
This paper proposes a novel preprocessing framework, which we term Robust and Accurate Image classificatioN (RAIN).
RAIN applies randomization over inputs to break the ties between the model forward prediction path and the backward gradient path, thus improving the model robustness.
We conduct extensive experiments on the STL10 and ImageNet datasets to verify the effectiveness of RAIN against various types of adversarial attacks.
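The snippet below illustrates input randomization as a preprocessing defence in the generic random resize-and-pad style; it is only a loose stand-in for RAIN, whose actual transformations and framework differ.

# Hedged sketch of generic input randomization before classification (random
# rescale then random pad); a loose illustration only, not RAIN's own pipeline.
import random
import torch
import torch.nn.functional as F

@torch.no_grad()
def randomized_predict(image, classifier, max_extra=32):
    # image: (1, 3, H, W). The random transform decouples the forward pass an
    # attacker observes from the gradients they compute against the model.
    _, _, h, w = image.shape
    new_h, new_w = h + random.randint(0, max_extra), w + random.randint(0, max_extra)
    resized = F.interpolate(image, size=(new_h, new_w), mode="bilinear",
                            align_corners=False)
    pad_h, pad_w = h + max_extra - new_h, w + max_extra - new_w
    top, left = random.randint(0, pad_h), random.randint(0, pad_w)
    padded = F.pad(resized, (left, pad_w - left, top, pad_h - top))
    return classifier(padded).argmax(dim=1)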
arXiv Detail & Related papers (2020-04-24T02:03:56Z)