Domain-Specific Pre-training Improves Confidence in Whole Slide Image
Classification
- URL: http://arxiv.org/abs/2302.09833v2
- Date: Wed, 3 May 2023 20:03:53 GMT
- Title: Domain-Specific Pre-training Improves Confidence in Whole Slide Image
Classification
- Authors: Soham Rohit Chitnis, Sidong Liu, Tirtharaj Dash, Tanmay Tulsidas
Verlekar, Antonio Di Ieva, Shlomo Berkovsky, Lovekesh Vig, Ashwin Srinivasan
- Abstract summary: Whole Slide Images (WSIs) or histopathology images are used in digital pathology.
WSIs pose great challenges to deep learning models for clinical diagnosis.
- Score: 15.354256205808273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Whole Slide Images (WSIs) or histopathology images are used in digital
pathology. WSIs pose great challenges to deep learning models for clinical
diagnosis, owing to their size and lack of pixel-level annotations. With the
recent advancements in computational pathology, newer multiple-instance
learning-based models have been proposed. Multiple-instance learning for WSIs
requires tiling each slide into patches and uses the encodings of these patches for
diagnosis. These models use generic pre-trained encoders (a ResNet-50 pre-trained
on ImageNet) for patch encoding. The recently proposed KimiaNet, a DenseNet121
model pre-trained on TCGA slides, is a domain-specific pre-trained model. This
paper shows the effect of domain-specific pre-training on WSI classification.
To investigate this effect, we considered two current state-of-the-art
multiple-instance learning models, (1) CLAM, an attention-based model, and
(2) TransMIL, a self-attention-based model, and evaluated the models'
confidence and predictive performance in detecting
primary brain tumors (gliomas). Domain-specific pre-training improves the
models' confidence and also achieves new state-of-the-art performance
in WSI-based glioma subtype classification, showing high clinical
applicability in assisting glioma diagnosis. We will publicly share our code
and experimental results at
https://github.com/soham-chitnis10/WSI-domain-specific.
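To make the pipeline described above concrete, the following is a minimal sketch (in PyTorch/torchvision, not the authors' released code) of the patch-encoding step that these MIL models operate on: each slide is tiled into patches, and each patch is embedded either with a generic ImageNet-pretrained ResNet-50 or with a DenseNet121 intended to carry domain-specific, KimiaNet-style weights. The checkpoint path is a placeholder and its key names may need remapping in practice. A complementary sketch of the slide-level aggregation step appears after the related-papers list below.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_patch_encoder(domain_specific: bool = True) -> nn.Module:
    """Return a frozen encoder that maps 224x224 RGB patches to embedding vectors."""
    if domain_specific:
        # DenseNet121 backbone; KimiaNet distributes TCGA-pretrained weights for this
        # architecture. The checkpoint path below is a placeholder, not a real file.
        encoder = models.densenet121(weights=None)
        state = torch.load("kimianet_weights.pth", map_location="cpu")  # hypothetical path
        encoder.load_state_dict(state, strict=False)   # key names may need remapping
        encoder.classifier = nn.Identity()             # expose 1024-d features
    else:
        # Generic encoder used by most MIL pipelines: ImageNet-pretrained ResNet-50.
        encoder = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        encoder.fc = nn.Identity()                     # expose 2048-d features
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)
    return encoder

# Encode one slide's tiled patches into a "bag" of embeddings for a MIL classifier.
# Switch to domain_specific=True once a KimiaNet-style checkpoint is available.
patches = torch.randn(64, 3, 224, 224)                 # stand-in for WSI patches
with torch.no_grad():
    bag = build_patch_encoder(domain_specific=False)(patches)   # shape: (64, 2048)
```

In this framing, swapping the generic encoder for a domain-specific one changes only the embeddings fed to the MIL classifier, which is what allows models such as CLAM and TransMIL to be compared under both encodings.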
Related papers
- Foundation Models for Slide-level Cancer Subtyping in Digital Pathology [1.7641392161755438]
This work aims to compare the performance of various feature extractors developed under different pretraining strategies for cancer subtyping on WSIs under a MIL framework.
Results demonstrate the ability of foundation models to surpass ImageNet-pretrained models for the prediction of six skin cancer subtypes.
arXiv Detail & Related papers (2024-10-21T11:04:58Z)
- Benchmarking Embedding Aggregation Methods in Computational Pathology: A Clinical Data Perspective [32.93871326428446]
Recent advances in artificial intelligence (AI) are revolutionizing medical imaging and computational pathology.
A constant challenge in the analysis of digital Whole Slide Images (WSIs) is aggregating tens of thousands of tile-level image embeddings into a slide-level representation.
This study conducts a benchmarking analysis of ten slide-level aggregation techniques across nine clinically relevant tasks; a minimal sketch of two such aggregation baselines appears after this list.
arXiv Detail & Related papers (2024-07-10T17:00:57Z)
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Prompt-Guided Adaptive Model Transformation for Whole Slide Image Classification [27.21493446754789]
Multiple instance learning (MIL) has emerged as a popular method for classifying histopathology whole slide images (WSIs).
We propose a Prompt-guided Adaptive Model Transformation framework that seamlessly adapts pre-trained models to the specific characteristics of histopathology data.
We rigorously evaluate our approach on two datasets, Camelyon16 and TCGA-NSCLC, showcasing substantial improvements across various MIL models.
arXiv Detail & Related papers (2024-03-19T08:23:12Z)
- PathoDuet: Foundation Models for Pathological Slide Analysis of H&E and IHC Stains [5.422494000842841]
We present PathoDuet, a series of pretrained models on histopathology images, and a new self-supervised learning framework in histochemistry.
The framework features a newly introduced pretext token and later task raisers to explicitly exploit certain relations between images.
Two pretext tasks, cross-scale positioning and cross-stain transferring, are designed to pretrain the model on Hematoxylin and Eosin images.
arXiv Detail & Related papers (2023-12-15T15:45:52Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Stacking Ensemble Learning in Deep Domain Adaptation for Ophthalmic Image Classification [61.656149405657246]
Domain adaptation is effective in image classification tasks where obtaining sufficient labeled data is challenging.
We propose a novel method, named SELDA, for stacking ensemble learning by extending three domain adaptation methods.
The experimental results using Age-Related Eye Disease Study (AREDS) benchmark ophthalmic dataset demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2022-09-27T14:19:00Z)
- From Modern CNNs to Vision Transformers: Assessing the Performance, Robustness, and Classification Strategies of Deep Learning Models in Histopathology [1.8947504307591034]
We develop a new methodology to extensively evaluate a wide range of classification models.
We thoroughly tested the models on five widely used histopathology datasets.
We extend existing interpretability methods and systematically reveal insights into the models' classification strategies.
arXiv Detail & Related papers (2022-04-11T12:26:19Z)
- A multi-stage machine learning model on diagnosis of esophageal manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence-style model to automatically predict CC diagnosis of an HRM study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
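All of the MIL pipelines above share a final step with the main paper: collapsing a bag of patch embeddings into a single slide-level prediction, the aggregation step benchmarked in the embedding-aggregation entry and implemented via attention in CLAM and self-attention in TransMIL. Below is a minimal PyTorch sketch of two common aggregation baselines, unweighted mean pooling and gated attention pooling in the spirit of attention-based MIL; the embedding size, hidden size, and class count are illustrative assumptions rather than values taken from any of the cited papers.

```python
import torch
import torch.nn as nn

class GatedAttentionMIL(nn.Module):
    """Slide-level classifier using gated attention pooling over patch embeddings
    (in the spirit of attention-based MIL models such as CLAM; sizes illustrative)."""

    def __init__(self, embed_dim: int = 1024, hidden_dim: int = 256, n_classes: int = 3):
        super().__init__()
        self.attn_v = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.Tanh())
        self.attn_u = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.Sigmoid())
        self.attn_w = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (n_patches, embed_dim) embeddings for a single slide
        scores = self.attn_w(self.attn_v(bag) * self.attn_u(bag))  # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)                     # attention over patches
        slide_embedding = (weights * bag).sum(dim=0)               # (embed_dim,)
        return self.classifier(slide_embedding)                    # class logits

def mean_pool_logits(bag: torch.Tensor, classifier: nn.Linear) -> torch.Tensor:
    """Simplest aggregation baseline: unweighted mean pooling of the bag."""
    return classifier(bag.mean(dim=0))

bag = torch.randn(64, 1024)                   # stand-in for one slide's patch embeddings
logits = GatedAttentionMIL()(bag)             # slide-level logits, e.g. glioma subtypes
confidence = torch.softmax(logits, dim=-1)    # predicted class probabilities
```

The softmax over the slide-level logits is one way to read off the per-class confidence that the main paper compares between generic and domain-specific patch encodings.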