Robustness Testing of Black-Box Models Against CT Degradation Through Test-Time Augmentation
- URL: http://arxiv.org/abs/2406.19557v1
- Date: Thu, 27 Jun 2024 22:17:49 GMT
- Title: Robustness Testing of Black-Box Models Against CT Degradation Through Test-Time Augmentation
- Authors: Jack Highton, Quok Zong Chong, Samuel Finestone, Arian Beqiri, Julia A. Schnabel, Kanwal K. Bhatia
- Abstract summary: Deep learning models for medical image segmentation and object detection are becoming increasingly available as clinical products.
As details are rarely provided about the training data, models may unexpectedly fail when cases differ from those in the training distribution.
A method to test the robustness of these models against CT image quality variation is presented.
- Score: 1.7788343872869767
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models for medical image segmentation and object detection are becoming increasingly available as clinical products. However, as details are rarely provided about the training data, models may unexpectedly fail when cases differ from those in the training distribution. An approach that allows potential users to independently test a model's robustness, treating it as a black box and using only a few cases from their own site, is key for adoption. To address this, a method to test the robustness of these models against CT image quality variation is presented. We demonstrate this framework by showing that, given the same training data, the model architecture and data pre-processing greatly affect the robustness of several frequently used segmentation and object detection methods to simulated CT imaging artifacts and degradation. Our framework also addresses concerns about the sustainability of deep learning models in clinical use by considering future shifts in image quality, due to scanner deterioration or imaging protocol changes, which are not reflected in a limited local test dataset.
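The testing loop described in the abstract can be sketched as follows: apply simulated CT degradations of increasing severity to a few local cases, run the black-box model on each degraded version, and track how an accuracy metric such as the Dice score falls. This is a minimal illustrative sketch, not the paper's implementation; the `degrade_ct` and `robustness_sweep` helpers, the chosen degradations (Gaussian noise as a low-dose proxy, Gaussian blur), and the thresholding toy model are all assumptions.

```python
import numpy as np
from scipy import ndimage

def degrade_ct(img, noise_sigma=0.0, blur_sigma=0.0, rng=None):
    """Simulate CT quality loss: Gaussian blur plus Gaussian noise
    (a crude proxy for low-dose acquisition). Hypothetical helper."""
    rng = rng or np.random.default_rng(0)
    out = ndimage.gaussian_filter(img, blur_sigma) if blur_sigma > 0 else img.copy()
    if noise_sigma > 0:
        out = out + rng.normal(0.0, noise_sigma, out.shape)
    return out

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / max(a.sum() + b.sum(), 1)

def robustness_sweep(model, images, masks, severities):
    """Treat `model` as a black box (image -> binary mask) and report
    mean Dice at each (noise_sigma, blur_sigma) severity level."""
    scores = []
    for noise_sigma, blur_sigma in severities:
        d = [dice(model(degrade_ct(x, noise_sigma, blur_sigma)), m)
             for x, m in zip(images, masks)]
        scores.append(float(np.mean(d)))
    return scores

# Toy demo: a thresholding "model" on a synthetic bright structure.
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
mask = img > 0.5
model = lambda x: x > 0.5
print(robustness_sweep(model, [img], [mask],
                       [(0.0, 0.0), (0.3, 1.0), (0.6, 2.0)]))
```

A falling score curve across severities would flag a model that is fragile to image-quality shift, even without access to its weights or training data.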
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - Unsupervised Contrastive Analysis for Salient Pattern Detection using Conditional Diffusion Models [13.970483987621135]
Contrastive Analysis (CA) aims to identify patterns in images that distinguish a background (BG) dataset from a target (TG) dataset (i.e., unhealthy subjects).
Recent works on this topic rely on variational autoencoders (VAE) or contrastive learning strategies to learn the patterns that separate TG samples from BG samples in a supervised manner.
We employ a self-supervised contrastive encoder to learn a latent representation encoding only common patterns from input images, using samples exclusively from the BG dataset during training, and approximating the distribution of the target patterns by leveraging data augmentation techniques.
arXiv Detail & Related papers (2024-06-02T15:19:07Z) - Diffusion Model Driven Test-Time Image Adaptation for Robust Skin Lesion Classification [24.08402880603475]
We propose a test-time image adaptation method to enhance the accuracy of the model on test data.
We modify the target test images by projecting them back to the source domain using a diffusion model.
Our method makes the model more robust across various corruptions, architectures, and data regimes.
arXiv Detail & Related papers (2024-05-18T13:28:51Z) - Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z) - An interpretable deep learning method for bearing fault diagnosis [12.069344716912843]
We utilize a convolutional neural network (CNN) with Gradient-weighted Class Activation Mapping (Grad-CAM) visualizations to form an interpretable Deep Learning (DL) method for classifying bearing faults.
During the model evaluation process, the proposed approach retrieves prediction basis samples from the health library according to the similarity of the feature importance.
arXiv Detail & Related papers (2023-08-20T15:22:08Z) - ADASSM: Adversarial Data Augmentation in Statistical Shape Models From Images [0.8192907805418583]
This paper introduces a novel strategy for on-the-fly data augmentation for the Image-to-SSM framework by leveraging data-dependent noise generation or texture augmentation.
Our approach achieves improved accuracy by encouraging the model to focus on the underlying geometry rather than relying solely on pixel values.
arXiv Detail & Related papers (2023-07-06T20:21:12Z) - Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method that uses masked images as counterfactual samples to help improve the robustness of the fine-tuned model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z) - DOMINO: Domain-aware Model Calibration in Medical Image Segmentation [51.346121016559024]
Modern deep neural networks are poorly calibrated, compromising trustworthiness and reliability.
We propose DOMINO, a domain-aware model calibration method that leverages the semantic confusability and hierarchical similarity between class labels.
Our results show that DOMINO-calibrated deep neural networks outperform non-calibrated models and state-of-the-art morphometric methods in head image segmentation.
arXiv Detail & Related papers (2022-09-13T15:31:52Z) - Application of Homomorphic Encryption in Medical Imaging [60.51436886110803]
We show how HE can be used to make predictions over medical images while preventing unauthorized secondary use of data.
We report some experiments using 3D chest CT-Scans for a nodule detection task.
arXiv Detail & Related papers (2021-10-12T19:57:12Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Improving Calibration and Out-of-Distribution Detection in Medical Image Segmentation with Convolutional Neural Networks [8.219843232619551]
Convolutional Neural Networks (CNNs) have been shown to be powerful medical image segmentation models.
We advocate for multi-task learning, i.e., training a single model on several different datasets.
We show that a single CNN not only learns to automatically recognize the context and accurately segment the organ of interest in each context, but also that such a joint model often makes more accurate and better-calibrated predictions.
arXiv Detail & Related papers (2020-04-12T23:42:51Z)
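The K-nearest neighbor smoothing (KNNS) idea from the thoracic disease entry above can be illustrated as averaging a sample's predicted class probabilities with those of its nearest neighbors in feature space. This is a hedged numpy sketch under that reading, not the authors' implementation; the `knn_smooth` helper and the Euclidean-distance choice are assumptions.

```python
import numpy as np

def knn_smooth(features, probs, k=3):
    """Smooth each sample's predicted probabilities by averaging them
    with those of its k nearest neighbors in feature space."""
    # Pairwise Euclidean distances between all feature vectors.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    smoothed = np.empty_like(probs)
    for i in range(len(features)):
        nn = np.argsort(dists[i])[:k + 1]  # the sample itself plus k neighbors
        smoothed[i] = probs[nn].mean(axis=0)
    return smoothed

# Toy demo: three nearby samples and one distant outlier.
feats = np.array([[0.0], [0.1], [0.2], [5.0]])
probs = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
print(knn_smooth(feats, probs, k=2))
```

Smoothing pulls a sample whose prediction disagrees with its neighborhood back toward the neighborhood consensus, which can suppress isolated misclassifications.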
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.