Are nuclear masks all you need for improved out-of-domain generalisation? A closer look at cancer classification in histopathology
- URL: http://arxiv.org/abs/2411.09373v1
- Date: Thu, 14 Nov 2024 11:27:15 GMT
- Title: Are nuclear masks all you need for improved out-of-domain generalisation? A closer look at cancer classification in histopathology
- Authors: Dhananjay Tomar, Alexander Binder, Andreas Kleppe
- Abstract summary: We propose a simple approach to improve OOD generalisation for cancer detection by focusing on nuclear morphology and organisation.
Our approach integrates original images with nuclear segmentation masks during training, encouraging the model to prioritise nuclei.
We show, using multiple datasets, that our method improves OOD generalisation and also leads to increased robustness to image corruptions and adversarial attacks.
- Score: 49.518701946822446
- Abstract: Domain generalisation in computational histopathology is challenging because the images are substantially affected by differences among hospitals due to factors like fixation and staining of tissue and imaging equipment. We hypothesise that focusing on nuclei can improve the out-of-domain (OOD) generalisation in cancer detection. We propose a simple approach to improve OOD generalisation for cancer detection by focusing on nuclear morphology and organisation, as these are domain-invariant features critical in cancer detection. Our approach integrates original images with nuclear segmentation masks during training, encouraging the model to prioritise nuclei and their spatial arrangement. Going beyond mere data augmentation, we introduce a regularisation technique that aligns the representations of masks and original images. We show, using multiple datasets, that our method improves OOD generalisation and also leads to increased robustness to image corruptions and adversarial attacks. The source code is available at https://github.com/undercutspiky/SFL/
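The abstract describes the method only at a high level. The sketch below illustrates one plausible reading of it in PyTorch: pass H&E tiles and their nuclear segmentation masks through a shared backbone, classify both views, and add a regularisation term that aligns the two representations. The ResNet-18 backbone, the MSE alignment term, and all names and weights are assumptions for illustration; the authors' actual implementation is in the linked repository.

```python
# Illustrative sketch only (assumed architecture and losses); the authors'
# code lives at https://github.com/undercutspiky/SFL/
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class NucleiAwareClassifier(nn.Module):
    """Hypothetical backbone + linear head used to illustrate the idea."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC layer
        self.head = nn.Linear(512, num_classes)

    def features(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x).flatten(1)  # (B, 512) pooled features


def training_step(model, images, masks, labels, align_weight: float = 1.0):
    """One step on paired (H&E tile, nuclear mask) inputs.

    images, masks: (B, 3, H, W); masks are binary nuclear masks repeated over channels.
    """
    f_img = model.features(images)
    f_mask = model.features(masks)

    # Classify both views so that nuclei and their arrangement alone suffice.
    cls_loss = F.cross_entropy(model.head(f_img), labels) + \
               F.cross_entropy(model.head(f_mask), labels)

    # Regulariser aligning image and mask representations, discouraging the
    # model from relying on stain/scanner cues that are absent from the mask.
    align_loss = F.mse_loss(f_img, f_mask)
    return cls_loss + align_weight * align_loss


if __name__ == "__main__":
    model = NucleiAwareClassifier()
    images = torch.randn(4, 3, 224, 224)
    masks = (torch.rand(4, 1, 224, 224) > 0.5).float().repeat(1, 3, 1, 1)
    labels = torch.randint(0, 2, (4,))
    training_step(model, images, masks, labels).backward()
```

The alignment term here is a stand-in for whatever representation-matching objective the paper uses; the key point it illustrates is that the mask view carries no stain or scanner information, so aligning the two views penalises domain-specific features.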
Related papers
- View it like a radiologist: Shifted windows for deep learning augmentation of CT images [11.902593645631034]
We propose a novel preprocessing and intensity augmentation scheme inspired by how radiologists leverage multiple viewing windows when evaluating CT images.
Our proposed method, window shifting, randomly places the viewing windows around the region of interest during training.
This approach improves liver lesion segmentation performance and robustness on images with poorly timed contrast agent.
arXiv Detail & Related papers (2023-11-25T10:28:08Z)
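For context, the snippet below sketches a generic randomly shifted intensity-window augmentation for CT in the spirit of the summary above. The window centre, width, and jitter range are arbitrary assumptions, and the paper's actual scheme, which places windows around the region of interest, may differ.

```python
import numpy as np


def random_window_shift(ct_hu, center=60.0, width=400.0, max_shift=50.0, rng=None):
    """Clip a CT image (Hounsfield units) to a randomly shifted viewing window
    and rescale to [0, 1]. Parameter values are illustrative, not from the paper."""
    rng = rng if rng is not None else np.random.default_rng()
    c = center + rng.uniform(-max_shift, max_shift)   # jitter the window centre
    lo, hi = c - width / 2.0, c + width / 2.0
    return (np.clip(ct_hu, lo, hi) - lo) / (hi - lo)


# Example: augment a synthetic CT slice during training.
slice_hu = np.random.uniform(-1000, 1000, size=(512, 512))
augmented = random_window_shift(slice_hu)
```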
- Attention-Map Augmentation for Hypercomplex Breast Cancer Classification [6.098816895102301]
We propose a framework, parameterized hypercomplex attention maps (PHAM), to overcome problems with breast cancer classification.
The framework offers two main advantages. First, attention maps provide critical information regarding the ROI and allow the neural model to concentrate on it.
We surpass attention-based state-of-the-art networks and the real-valued counterpart of our approach.
arXiv Detail & Related papers (2023-10-11T16:28:24Z)
- Breast Cancer Segmentation using Attention-based Convolutional Network and Explainable AI [0.0]
Breast cancer (BC) remains a significant health threat, with no long-term cure currently available.
Early detection is crucial, yet mammography interpretation is hindered by high false positives and negatives.
This work presents an attention-based convolutional neural network for segmentation, providing increased speed and precision in BC detection and classification.
arXiv Detail & Related papers (2023-05-22T20:49:20Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z) - Radiomics-Guided Global-Local Transformer for Weakly Supervised
Pathology Localization in Chest X-Rays [65.88435151891369]
Radiomics-Guided Transformer (RGT) fuses global image information with local knowledge-guided radiomics information.
RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
arXiv Detail & Related papers (2022-07-10T06:32:56Z)
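As a rough illustration of the two-branch design described above, the toy module below encodes image tokens and radiomics tokens in separate Transformer branches and aggregates them with fusion layers. Dimensions, depths, and the mean-pool fusion are assumptions, not the RGT architecture itself.

```python
import torch
import torch.nn as nn


class TwoBranchFusion(nn.Module):
    """Toy image + radiomics fusion; a sketch, not the actual RGT model."""

    def __init__(self, img_dim=768, rad_dim=107, hidden=256, num_classes=14):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)
        self.rad_proj = nn.Linear(rad_dim, hidden)
        self.img_branch = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True), num_layers=2)
        self.rad_branch = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True), num_layers=2)
        self.fusion = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                    nn.Linear(hidden, num_classes))

    def forward(self, img_tokens, rad_tokens):
        # img_tokens: (B, N, img_dim) patch features; rad_tokens: (B, M, rad_dim).
        zi = self.img_branch(self.img_proj(img_tokens)).mean(dim=1)
        zr = self.rad_branch(self.rad_proj(rad_tokens)).mean(dim=1)
        return self.fusion(torch.cat([zi, zr], dim=-1))  # (B, num_classes) logits
```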
- MaNi: Maximizing Mutual Information for Nuclei Cross-Domain Unsupervised Segmentation [9.227037203895533]
We propose a mutual information (MI) based unsupervised domain adaptation (UDA) method for cross-domain nuclei segmentation.
Nuclei vary substantially in structure and appearances across different cancer types, leading to a drop in performance of deep learning models when trained on one cancer type and tested on another.
arXiv Detail & Related papers (2022-06-29T07:24:02Z)
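The summary above does not spell out the objective, so the snippet below sketches one generic way to maximise mutual information across domains with a Jensen-Shannon lower bound and a small critic. Treating source-nucleus/target-nucleus features as joint samples and source-nucleus/target-background features as marginal samples is an assumption, not necessarily MaNi's formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small critic scoring concatenated feature pairs; dimensions are illustrative.
critic = nn.Sequential(nn.Linear(2 * 256, 256), nn.ReLU(), nn.Linear(256, 1))


def jsd_mi_loss(src_nuc, tgt_nuc, tgt_bg):
    """Negative Jensen-Shannon MI bound: minimising it pulls source- and
    target-domain nucleus representations together while pushing target
    background away. Pair construction is assumed, not taken from the paper."""
    pos = critic(torch.cat([src_nuc, tgt_nuc], dim=-1))   # "joint" pairs
    neg = critic(torch.cat([src_nuc, tgt_bg], dim=-1))    # "marginal" pairs
    return F.softplus(-pos).mean() + F.softplus(neg).mean()


# Example with random (B, 256) nucleus/background embeddings.
loss = jsd_mi_loss(torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 256))
```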
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Cross-Modal Contrastive Learning for Abnormality Classification and Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
arXiv Detail & Related papers (2021-04-11T09:16:29Z)
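A minimal InfoNCE-style sketch of the cross-modal idea summarised above: the radiomics embedding of a chest X-ray acts as the positive for that X-ray's image embedding, while other studies in the batch act as negatives. The temperature, dimensions, and symmetric loss are assumptions rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F


def cross_modal_info_nce(img_emb, rad_emb, temperature=0.07):
    """img_emb, rad_emb: (B, D) embeddings of the same B chest X-rays from the
    image encoder and the radiomics encoder. Matching rows are positives."""
    img = F.normalize(img_emb, dim=-1)
    rad = F.normalize(rad_emb, dim=-1)
    logits = img @ rad.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    # Symmetric cross-entropy: image-to-radiomics and radiomics-to-image.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


# Example with random embeddings for a batch of 16 studies.
loss = cross_modal_info_nce(torch.randn(16, 128), torch.randn(16, 128))
```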
- SDCT-AuxNet$^{\theta}$: DCT Augmented Stain Deconvolutional CNN with Auxiliary Classifier for Cancer Diagnosis [14.567067583556714]
Acute lymphoblastic leukemia (ALL) is a pervasive pediatric white blood cell cancer across the globe.
This paper presents a novel deep learning architecture for the classification of cell images of ALL cancer.
Elaborate experiments have been carried out on our recently released public dataset of 15114 images of ALL cancer and healthy cells.
arXiv Detail & Related papers (2020-05-30T16:01:31Z)
- Towards a Complete Pipeline for Segmenting Nuclei in Feulgen-Stained Images [52.946144307741974]
Cervical cancer is the second most common cancer type in women around the world.
We present a complete pipeline for the segmentation of nuclei in Feulgen-stained images using Convolutional Neural Networks.
We achieved an overall IoU of 0.78, demonstrating the feasibility of nuclei segmentation on Feulgen-stained images.
arXiv Detail & Related papers (2020-02-19T18:14:57Z)