Quantifying the Scanner-Induced Domain Gap in Mitosis Detection
- URL: http://arxiv.org/abs/2103.16515v1
- Date: Tue, 30 Mar 2021 17:17:25 GMT
- Title: Quantifying the Scanner-Induced Domain Gap in Mitosis Detection
- Authors: Marc Aubreville, Christof Bertram, Mitko Veta, Robert Klopfleisch,
Nikolas Stathonikos, Katharina Breininger, Natalie ter Hoeve, Francesco
Ciompi, and Andreas Maier
- Abstract summary: We evaluate the susceptibility of a standard mitosis detection approach to the domain shift introduced by using a different whole slide scanner.
Our work indicates that the domain shift induced not by biochemical variability but purely by the choice of acquisition device is underestimated.
- Score: 8.09551131543818
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated detection of mitotic figures in histopathology images has seen vast
improvements, thanks to modern deep learning-based pipelines. Application of
these methods, however, is in practice limited by strong variability of images
between labs. This results in a domain shift of the images, which causes a
performance drop of the models. Hypothesizing that the scanner device plays a
decisive role in this effect, we evaluated the susceptibility of a standard
mitosis detection approach to the domain shift introduced by using a different
whole slide scanner. Our work is based on the MICCAI-MIDOG challenge 2021 data
set, which includes 200 tumor cases of human breast cancer and four scanners.
Our work indicates that the domain shift induced not by biochemical
variability but purely by the choice of acquisition device has so far been
underestimated. Models trained on images of the same scanner yielded an average F1 score
of 0.683, while models trained on a single other scanner only yielded an
average F1 score of 0.325. Training on another multi-domain mitosis dataset led
to a mean F1 score of 0.52. We found that this performance gap was not
reflected by the domain shift as measured by a proxy-A-distance-derived metric.
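For context, the two headline quantities in the abstract can be sketched in a few lines: the detection F1 score computed from matched and unmatched detections, and the proxy A-distance, which converts the test error of a binary domain classifier into a divergence estimate in [0, 2]. This is an illustrative sketch, not code from the paper; the function names are assumptions.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Detection F1 from matched detections (tp), spurious
    detections (fp), and missed ground-truth objects (fn)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def proxy_a_distance(domain_clf_error: float) -> float:
    """Proxy A-distance from the test error of a classifier trained
    to distinguish two domains: PAD = 2 * (1 - 2 * err).
    err = 0.5 (domains indistinguishable) -> PAD = 0;
    err = 0.0 (perfectly separable)       -> PAD = 2."""
    return max(0.0, 2.0 * (1.0 - 2.0 * domain_clf_error))


# Example: a scanner classifier that errs on 10% of patches
# suggests a large scanner-induced domain gap.
print(round(proxy_a_distance(0.10), 2))  # 1.6
```

The paper's observation is that such classifier-based domain-gap proxies did not track the F1 drop seen when transferring detectors between scanners.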
Related papers
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Multi-Scanner Canine Cutaneous Squamous Cell Carcinoma Histopathology Dataset [6.309771474997404]
In histopathology, scanner-induced domain shifts are known to impede the performance of trained neural networks when tested on unseen data.
We present a publicly available multi-scanner dataset of canine cutaneous squamous cell carcinoma histopathology images.
arXiv Detail & Related papers (2023-01-11T12:02:10Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- MitoDet: Simple and robust mitosis detection [0.31498833540989407]
An important source of a domain shift is introduced by different microscopes and their camera systems, which noticeably change the color representation of digitized images.
We present our submitted algorithm for the Mitosis Domain Generalization Challenge, which employs a RetinaNet trained with strong data augmentation and achieves an F1 score of 0.7138 on the preliminary test set.
arXiv Detail & Related papers (2021-09-02T17:19:08Z)
- Assessing domain adaptation techniques for mitosis detection in multi-scanner breast cancer histopathology images [0.6999740786886536]
We train two mitosis detection models and two style transfer methods and evaluate the usefulness of the latter for improving mitosis detection performance.
The best of these models, U-Net without style transfer, achieved an F1-score of 0.693 on the MIDOG 2021 preliminary test set.
arXiv Detail & Related papers (2021-09-01T16:27:46Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification, using the largest and richest dataset of its kind.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Zero-Shot Domain Adaptation in CT Segmentation by Filtered Back Projection Augmentation [0.1197985185770095]
Domain shift is one of the most salient challenges in medical computer vision.
We address variability in computed tomography (CT) images caused by different convolution kernels used in the reconstruction process.
We propose Filtered Back-Projection Augmentation (FBPAug), a simple and surprisingly efficient approach to augment CT images in sinogram space emulating reconstruction with different kernels.
arXiv Detail & Related papers (2021-07-18T21:46:49Z)
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build the domain irrelevant latent space image representation and demonstrate this method to outperform existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
- Improved inter-scanner MS lesion segmentation by adversarial training on longitudinal data [0.0]
White matter lesion progression is an important biomarker in the follow-up of MS patients.
Current automated lesion segmentation algorithms are susceptible to variability in image characteristics related to MRI scanner or protocol differences.
We propose a model that improves the consistency of MS lesion segmentations in inter-scanner studies.
arXiv Detail & Related papers (2020-02-03T16:56:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.