The Impact of Scanner Domain Shift on Deep Learning Performance in Medical Imaging: an Experimental Study
- URL: http://arxiv.org/abs/2409.04368v2
- Date: Wed, 2 Oct 2024 13:51:37 GMT
- Title: The Impact of Scanner Domain Shift on Deep Learning Performance in Medical Imaging: an Experimental Study
- Authors: Brian Guo, Darui Lu, Gregory Szumel, Rongze Gui, Tingyu Wang, Nicholas Konz, Maciej A. Mazurowski
- Abstract summary: We evaluate the impact of scanner domain shift on convolutional neural network performance for different automated diagnostic tasks.
We find that network performance on data from a different scanner is almost always worse than on same-scanner data.
We attribute the comparatively small drop for CT to the standardized nature of CT acquisition systems, which is not present in MRI or X-ray.
- Score: 1.4628485867112924
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purpose: Medical images acquired using different scanners and protocols can differ substantially in their appearance. This phenomenon, scanner domain shift, can result in a drop in the performance of deep neural networks which are trained on data acquired by one scanner and tested on another. This significant practical issue is well acknowledged; however, no systematic study of the issue is available across different modalities and diagnostic tasks. Materials and Methods: In this paper, we present a broad experimental study evaluating the impact of scanner domain shift on convolutional neural network performance for different automated diagnostic tasks. We evaluate this phenomenon in common radiological modalities, including X-ray, CT, and MRI. Results: We find that network performance on data from a different scanner is almost always worse than on same-scanner data, and we quantify the degree of performance drop across different datasets. Notably, we find that this drop is most severe for MRI, moderate for X-ray, and quite small for CT, on average, which we attribute to the standardized nature of CT acquisition systems, which is not present in MRI or X-ray. We also study how injecting varying amounts of target domain data into the training set, as well as adding noise to the training data, helps with generalization. Conclusion: Our results provide extensive experimental evidence and quantification of the extent of performance drop caused by scanner domain shift in deep learning across different modalities, with the goal of guiding the future development of robust deep learning models for medical image analysis.
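The two mitigation strategies named in the abstract, mixing a small amount of target-scanner data into the training set and adding noise to the training images, can be illustrated with a short sketch. This is a minimal, hypothetical example; the dataset objects, the `target_fraction` knob, and the noise level `sigma` are placeholders rather than values from the paper.

```python
import random

import torch
from torch.utils.data import ConcatDataset, Subset


def mix_in_target_domain(source_ds, target_ds, target_fraction=0.1, seed=0):
    """Build a training set that is mostly source-scanner data plus a small,
    randomly chosen fraction of target-scanner data. `target_fraction` is a
    hypothetical knob, not a value reported in the paper."""
    rng = random.Random(seed)
    n_target = int(target_fraction * len(target_ds))
    picked = rng.sample(range(len(target_ds)), n_target)
    return ConcatDataset([source_ds, Subset(target_ds, picked)])


def add_gaussian_noise(images, sigma=0.05):
    """Noise augmentation for a batch of images scaled to [0, 1];
    sigma is an assumed noise level used only for illustration."""
    return (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)
```

Varying `target_fraction` in a sweep is one way to trace how quickly cross-scanner performance recovers as target-domain data is injected, which is the kind of analysis the abstract describes.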
Related papers
- Pathology Foundation Models are Scanner Sensitive: Benchmark and Mitigation with Contrastive ScanGen Loss [6.310092608526967]
We show that Foundation Models (FMs) still suffer from scanner bias.
We propose ScanGen, a contrastive loss function applied during task-specific fine-tuning that mitigates scanner bias.
Our approach is applied to the Multiple Instance Learning task of Epidermal Growth Factor Receptor (EGFR) mutation prediction from H&E-stained WSIs in lung cancer.
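The summary does not give ScanGen's exact formulation, so the sketch below is only a generic InfoNCE-style contrastive term for scanner invariance: embeddings of the same case under two different scanners (or scanner-style augmentations) are pulled together, while all other pairs are pushed apart. The function name, the pairing scheme, and the temperature are illustrative assumptions, not the published loss.

```python
import torch
import torch.nn.functional as F


def scanner_invariance_contrastive(z_a, z_b, temperature=0.1):
    """InfoNCE-style loss over two batches of embeddings, where row i of
    `z_a` and row i of `z_b` come from the same case seen under different
    scanners (or scanner-style augmentations). Matching rows are positives,
    all other rows are negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature            # (N, N) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)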
arXiv Detail & Related papers (2025-07-29T12:35:08Z) - Evaluation of Vision Transformers for Multimodal Image Classification: A Case Study on Brain, Lung, and Kidney Tumors [0.0]
This work evaluates the performance of Vision Transformer architectures, including Swin Transformer and MaxViT, on several datasets.
We used three training sets of images with brain, lung, and kidney tumors.
Swin Transformer provided high accuracy, achieving up to 99.9% for kidney tumor classification and 99.3% accuracy in a combined dataset.
arXiv Detail & Related papers (2025-02-08T10:35:51Z) - Disease Classification and Impact of Pretrained Deep Convolution Neural Networks on Diverse Medical Imaging Datasets across Imaging Modalities [0.0]
This paper investigates the intricacies of using pretrained deep convolutional neural networks with transfer learning across diverse medical imaging datasets.
It shows that the use of pretrained models as fixed feature extractors yields poor performance irrespective of the datasets.
It is also found that deeper and more complex architectures did not necessarily result in the best performance.
arXiv Detail & Related papers (2024-08-30T04:51:19Z) - On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts because these artifacts change the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
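The normalization point above is easy to make concrete: BatchNorm relies on statistics tied to the training distribution, whereas GroupNorm and LayerNorm compute statistics per sample, which removes one dependence on the test-time input distribution. A minimal sketch with hypothetical layer sizes:

```python
import torch.nn as nn

# BatchNorm normalizes with statistics estimated from the training
# distribution, so artifact-induced shifts at test time can degrade it.
bn_block = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(),
)

# GroupNorm (or LayerNorm) computes statistics per sample, which the paper
# above reports to be more robust to input distribution shifts.
gn_block = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    nn.GroupNorm(num_groups=8, num_channels=32),
    nn.ReLU(),
)
```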
arXiv Detail & Related papers (2023-06-23T03:09:03Z) - FAST-AID Brain: Fast and Accurate Segmentation Tool using Artificial Intelligence Developed for Brain [0.8376091455761259]
A novel deep learning method is proposed for fast and accurate segmentation of the human brain into 132 regions.
The proposed model uses an efficient U-Net-like network and benefits from the intersection points of different views and hierarchical relations.
The proposed method can be applied to brain MRI data, including scans with the skull or other artifacts, without preprocessing the images and without a drop in performance.
arXiv Detail & Related papers (2022-08-30T16:06:07Z) - PrepNet: A Convolutional Auto-Encoder to Homogenize CT Scans for Cross-Dataset Medical Image Analysis [0.22485007639406518]
COVID-19 diagnosis can now be done efficiently using PCR tests, but this use case exemplifies the need for a methodology to overcome data variability issues.
We propose a novel generative approach that aims to erase the differences induced by, e.g., the imaging technology while introducing minimal changes to the CT scans.
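The summary does not describe PrepNet's architecture; the following is only a generic convolutional auto-encoder of the kind such a homogenization step could use, with layer sizes chosen for illustration rather than taken from the paper.

```python
import torch.nn as nn


class ScanHomogenizer(nn.Module):
    """Generic convolutional auto-encoder that maps a CT slice to a
    'homogenized' slice of the same size (input height/width divisible by 4).
    Layer sizes are illustrative, not taken from the PrepNet paper."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

How the training objective balances erasing scanner differences against preserving diagnostic content is specific to the paper and not reproduced here.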
arXiv Detail & Related papers (2022-08-19T15:49:47Z) - Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention and, in contrast to CNNs, encode no prior knowledge of local connectivity.
Our results show that while ViTs and CNNs perform on par, with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z) - Learn to Ignore: Domain Adaptation for Multi-Site MRI Analysis [1.3079444139643956]
We present a novel method that learns to ignore the scanner-related features present in the images, while learning features relevant for the classification task.
Our method outperforms state-of-the-art domain adaptation methods on a classification task between Multiple Sclerosis patients and healthy subjects.
arXiv Detail & Related papers (2021-10-13T15:40:50Z) - Deep Transfer Learning for Brain Magnetic Resonance Image Multi-class Classification [0.6117371161379209]
We have developed a framework that uses Deep Transfer Learning to perform multi-class classification of tumors in brain MRI images.
Using a novel dataset and two publicly available brain MRI datasets, the proposed approach attained a classification accuracy of 86.40%.
Results of our experiments demonstrate that the proposed transfer learning framework is a promising and effective method for brain tumor multi-class classification tasks.
arXiv Detail & Related papers (2021-06-14T12:19:27Z) - Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build a domain-irrelevant latent-space image representation and demonstrate that this method outperforms existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z) - Domain Shift in Computer Vision models for MRI data analysis: An Overview [64.69150970967524]
Machine learning and computer vision methods are showing good performance in medical imagery analysis.
Yet only a few applications are now in clinical use.
Poor transferability of the models to data from different sources or acquisition domains is one of the reasons for that.
arXiv Detail & Related papers (2020-10-14T16:34:21Z) - Deep Mining External Imperfect Data for Chest X-ray Disease Screening [57.40329813850719]
We argue that incorporating an external CXR dataset leads to imperfect training data, which raises challenges from domain and label discrepancies.
We formulate the multi-label disease classification problem as weighted independent binary tasks according to the categories.
Our framework simultaneously models and tackles the domain and label discrepancies, enabling superior knowledge mining ability.
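The "weighted independent binary tasks" formulation can be sketched as a per-class weighted binary cross-entropy; the weighting scheme below is a hypothetical placeholder, since the summary does not specify how the weights are chosen.

```python
import torch
import torch.nn.functional as F


def weighted_multilabel_loss(logits, targets, class_weights):
    """Treat each disease label as an independent binary task and weight each
    task's binary cross-entropy. `class_weights` (one scalar per class) is a
    hypothetical weighting scheme used purely for illustration."""
    per_task = F.binary_cross_entropy_with_logits(
        logits, targets.float(), reduction="none")  # shape (batch, num_classes)
    return (per_task * class_weights).mean()
```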
arXiv Detail & Related papers (2020-06-06T06:48:40Z) - MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.