Detecting when pre-trained nnU-Net models fail silently for Covid-19 lung lesion segmentation
- URL: http://arxiv.org/abs/2107.05975v2
- Date: Wed, 14 Jul 2021 11:45:47 GMT
- Title: Detecting when pre-trained nnU-Net models fail silently for Covid-19 lung lesion segmentation
- Authors: Camila Gonzalez, Karol Gotkowski, Andreas Bucher, Ricarda Fischbach, Isabel Kaltenborn, Anirban Mukhopadhyay
- Abstract summary: We propose a lightweight OOD detection method that exploits the Mahalanobis distance in the feature space.
We validate our method with a patch-based nnU-Net architecture trained with a multi-institutional dataset.
- Score: 0.34940201626430645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic segmentation of lung lesions in computed tomography has the
potential to ease the burden of clinicians during the Covid-19 pandemic. Yet
predictive deep learning models are not trusted in the clinical routine due to
failing silently on out-of-distribution (OOD) data. We propose a lightweight
OOD detection method that exploits the Mahalanobis distance in the feature
space. The proposed approach can be seamlessly integrated into state-of-the-art
segmentation pipelines without requiring changes in model architecture or
training procedure, and can therefore be used to assess the suitability of
pre-trained models to new data. We validate our method with a patch-based
nnU-Net architecture trained with a multi-institutional dataset and find that
it effectively detects samples that the model segments incorrectly.
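The Mahalanobis-distance check described in the abstract can be sketched in a few lines: fit a Gaussian to features of in-distribution training samples, then flag test samples whose distance to that Gaussian exceeds a threshold. The sketch below is a minimal illustration, not the authors' implementation; in the paper the features come from the nnU-Net encoder, whereas here they are stand-in random vectors, and the 95th-percentile threshold is a common heuristic rather than the paper's chosen operating point.

```python
import numpy as np

def fit_gaussian(train_features):
    """Estimate mean and inverse covariance of in-distribution features."""
    mu = train_features.mean(axis=0)
    # Small ridge term keeps the covariance invertible
    cov = np.cov(train_features, rowvar=False) + 1e-6 * np.eye(train_features.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    """Mahalanobis distance of feature vector x to the fitted Gaussian."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Stand-in for encoder features of the training distribution
rng = np.random.default_rng(0)
train = rng.normal(size=(500, 8))
mu, cov_inv = fit_gaussian(train)

# Threshold chosen from in-distribution distances (95th percentile here)
threshold = np.percentile([mahalanobis(f, mu, cov_inv) for f in train], 95)

# A sample drawn far from the training distribution is flagged as OOD
is_ood = mahalanobis(rng.normal(loc=5.0, size=8), mu, cov_inv) > threshold
```

In the patch-based setting the paper describes, such distances would be computed per patch and aggregated per scan before comparing against the threshold.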
Related papers
- Weakly supervised deep learning model with size constraint for prostate cancer detection in multiparametric MRI and generalization to unseen domains [0.90668179713299]
We show that the model achieves on-par performance with strong fully supervised baseline models.
We also observe a performance decrease for both fully supervised and weakly supervised models when tested on unseen data domains.
arXiv Detail & Related papers (2024-11-04T12:24:33Z)
- An interpretable deep learning method for bearing fault diagnosis [12.069344716912843]
We utilize a convolutional neural network (CNN) with Gradient-weighted Class Activation Mapping (Grad-CAM) visualizations to form an interpretable Deep Learning (DL) method for classifying bearing faults.
During the model evaluation process, the proposed approach retrieves prediction basis samples from the health library according to the similarity of the feature importance.
arXiv Detail & Related papers (2023-08-20T15:22:08Z)
- Distance-based detection of out-of-distribution silent failures for Covid-19 lung lesion segmentation [0.8200989595956418]
Deep learning models are not trusted in the clinical routine due to failing silently on out-of-distribution data.
We propose a lightweight OOD detection method that leverages the Mahalanobis distance in the feature space.
We validate our method across four chest CT distribution shifts and two magnetic resonance imaging applications.
arXiv Detail & Related papers (2022-08-05T15:05:23Z)
- Out of Distribution Detection and Adversarial Attacks on Deep Neural Networks for Robust Medical Image Analysis [8.985261743452988]
We experimentally evaluate the robustness of a Mahalanobis distance-based confidence score, a simple yet effective method for detecting abnormal input samples.
Results indicated that the Mahalanobis confidence score detector exhibits improved performance and robustness of deep learning models.
arXiv Detail & Related papers (2021-07-10T18:00:40Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce an importance-guided stochastic gradient descent (IGSGD) method to train models on inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
- Deep k-NN for Noisy Labels [55.97221021252733]
We show that a simple $k$-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled data and produce more accurate models than many recently proposed methods.
arXiv Detail & Related papers (2020-04-26T05:15:36Z)
- An Adversarial Approach for the Robust Classification of Pneumonia from Chest Radiographs [9.462808515258464]
Deep learning models often exhibit performance loss due to dataset shift.
Models trained using data from one hospital system achieve high predictive performance when tested on data from the same hospital, but perform significantly worse when tested in different hospital systems.
We propose an approach based on adversarial optimization, which allows us to learn more robust models that do not depend on confounders.
arXiv Detail & Related papers (2020-01-13T03:49:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.