Semantic Segmentation Based Quality Control of Histopathology Whole Slide Images
- URL: http://arxiv.org/abs/2410.03289v1
- Date: Fri, 4 Oct 2024 10:03:04 GMT
- Title: Semantic Segmentation Based Quality Control of Histopathology Whole Slide Images
- Authors: Abhijeet Patil, Garima Jain, Harsh Diwakar, Jay Sawant, Tripti Bameta, Swapnil Rane, Amit Sethi
- Abstract summary: We developed a software pipeline for quality control (QC) of histopathology whole slide images (WSIs)
It segments various regions, such as blurs of different levels, tissue regions, tissue folds, and pen marks.
It was evaluated on all of TCGA, the largest publicly available WSI dataset, containing more than 11,000 histopathology images from 28 organs.
- Score: 2.953447779233234
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We developed a software pipeline for quality control (QC) of histopathology whole slide images (WSIs) that segments various regions, such as blurs of different levels, tissue regions, tissue folds, and pen marks. Given the necessity and increasing availability of GPUs for processing WSIs, the proposed pipeline comprises multiple lightweight deep learning models to strike a balance between accuracy and speed. The pipeline was evaluated on all of TCGA, the largest publicly available WSI dataset, containing more than 11,000 histopathological images from 28 organs. It was compared to a previous work, which was not based on deep learning, and showed consistent improvement in segmentation results across organs. To minimize annotation effort for tissue and blur segmentation, annotated images were prepared automatically by mosaicking patches (sub-images) from various WSIs whose labels were identified using the patch classification tool HistoROI. Due to the generality of our trained QC pipeline and its extensive testing, the potential impact of this work is broad. It can be used for automated pre-processing of any WSI cohort to enhance the accuracy and reliability of large-scale histopathology image analysis for both research and clinical use. We have made the trained models, training scripts, training data, and inference results publicly available at https://github.com/abhijeetptl5/wsisegqc, which should enable the research community to use the pipeline right out of the box or further customize it for new datasets and applications in the future.
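The mosaicking strategy described in the abstract — tiling labeled patches from many WSIs into a single training image with a matching per-pixel label mask — can be sketched as follows. This is not the authors' code: `mosaic_patches` is a hypothetical helper, and the patch labels (which the paper obtains from the HistoROI patch classifier) are assumed to be given directly.

```python
# Sketch (assumed, not the authors' implementation): build an annotated
# training image by mosaicking labeled patches into a grid. Each patch's
# class label (e.g., from a patch classifier such as HistoROI) is painted
# over the corresponding tile of a per-pixel label mask.

def mosaic_patches(patches, grid_cols, patch_size):
    """Tile (pixels, label) pairs into one mosaic image plus a label mask.

    patches    : list of (pixels, label), where pixels is a
                 patch_size x patch_size list of lists (grayscale values)
                 and label is an integer class id.
    grid_cols  : number of patches per mosaic row.
    patch_size : side length of each square patch, in pixels.
    """
    rows = -(-len(patches) // grid_cols)      # ceiling division
    h, w = rows * patch_size, grid_cols * patch_size
    image = [[0] * w for _ in range(h)]       # mosaicked training image
    mask = [[0] * w for _ in range(h)]        # per-pixel class labels
    for idx, (pixels, label) in enumerate(patches):
        r0 = (idx // grid_cols) * patch_size  # top-left corner of this tile
        c0 = (idx % grid_cols) * patch_size
        for r in range(patch_size):
            for c in range(patch_size):
                image[r0 + r][c0 + c] = pixels[r][c]
                mask[r0 + r][c0 + c] = label
    return image, mask


# Usage: three 2x2 patches whose pixel values equal their class id,
# arranged two per row, yielding a 4x4 image and mask.
patches = [([[i] * 2 for _ in range(2)], i) for i in range(3)]
img, msk = mosaic_patches(patches, grid_cols=2, patch_size=2)
```

In practice the patches would be RGB tiles cropped from WSIs, but the bookkeeping — mapping a flat patch index to a grid position and filling both the image and the mask — is the same.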
Related papers
- PathAlign: A vision-language model for whole slide images in histopathology [13.567674461880905]
We develop a vision-language model based on the BLIP-2 framework using WSIs and curated text from pathology reports.
This enables applications utilizing a shared image-text embedding space, such as text or image retrieval for finding cases of interest.
We present pathologist evaluation of text generation and text retrieval using WSI embeddings, as well as results for WSI classification and workflow prioritization.
arXiv Detail & Related papers (2024-06-27T23:43:36Z)
- A self-supervised framework for learning whole slide representations [52.774822784847565]
We present Slide Pre-trained Transformers (SPT) for gigapixel-scale self-supervision of whole slide images.
We benchmark SPT visual representations on five diagnostic tasks across three biomedical microscopy datasets.
arXiv Detail & Related papers (2024-02-09T05:05:28Z)
- MUSTANG: Multi-Stain Self-Attention Graph Multiple Instance Learning Pipeline for Histopathology Whole Slide Images [1.127806343149511]
Whole Slide Images (WSIs) present a challenging computer vision task due to their gigapixel size and presence of artefacts.
Real-world clinical datasets tend to come as sets of heterogeneous WSIs with labels present at the patient-level, with poor to no annotations.
Here we propose an end-to-end multi-stain self-attention graph (MUSTANG) multiple instance learning pipeline.
arXiv Detail & Related papers (2023-09-19T14:30:14Z)
- Context-Aware Self-Supervised Learning of Whole Slide Images [0.0]
A novel two-stage learning technique is presented in this work.
A graph representation capturing the dependencies among regions of a WSI is intuitive.
The entire slide is presented as a graph, where the nodes correspond to the patches from the WSI.
The proposed framework is then tested using WSIs from prostate and kidney cancers.
arXiv Detail & Related papers (2023-06-07T20:23:05Z)
- CSP: Self-Supervised Contrastive Spatial Pre-Training for Geospatial-Visual Representations [90.50864830038202]
We present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images.
CSP significantly boosts the model performance with 10-34% relative improvement with various labeled training data sampling ratios.
arXiv Detail & Related papers (2023-05-01T23:11:18Z)
- A Knowledge Distillation framework for Multi-Organ Segmentation of Medaka Fish in Tomographic Image [5.881800919492064]
We propose a self-training framework for multi-organ segmentation in tomographic images of Medaka fish.
We utilize the pseudo-labeled data from a pretrained model and adopt a Quality Teacher to refine the pseudo-labeled data.
The experimental results demonstrate that our method improves mean Intersection over Union (IoU) by 5.9% on the full dataset.
arXiv Detail & Related papers (2023-02-24T10:31:29Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representation of giga-pixel level whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- SIAN: Style-Guided Instance-Adaptive Normalization for Multi-Organ Histopathology Image Synthesis [63.845552349914186]
We propose a style-guided instance-adaptive normalization (SIAN) to synthesize realistic color distributions and textures for different organs.
The four phases work together and are integrated into a generative network to embed image semantics, style, and instance-level boundaries.
arXiv Detail & Related papers (2022-09-02T16:45:46Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- HistoTransfer: Understanding Transfer Learning for Histopathology [9.231495418218813]
We compare the performance of features extracted from networks trained on ImageNet and histopathology data.
We investigate if features learned using more complex networks lead to gain in performance.
arXiv Detail & Related papers (2021-06-13T18:55:23Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.