HistoART: Histopathology Artifact Detection and Reporting Tool
- URL: http://arxiv.org/abs/2507.00044v1
- Date: Mon, 23 Jun 2025 17:22:19 GMT
- Title: HistoART: Histopathology Artifact Detection and Reporting Tool
- Authors: Seyed Kahaki, Alexander R. Webber, Ghada Zamzmi, Adarsh Subbaswamy, Rucha Deshpande, Aldo Badano
- Abstract summary: Whole Slide Imaging (WSI) is widely used to digitize tissue specimens for detailed, high-resolution examination. WSI remains vulnerable to artifacts introduced during slide preparation and scanning. We propose and compare three robust artifact detection approaches for WSIs.
- Score: 37.31105955164019
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In modern cancer diagnostics, Whole Slide Imaging (WSI) is widely used to digitize tissue specimens for detailed, high-resolution examination; however, other diagnostic approaches, such as liquid biopsy and molecular testing, are also utilized based on the cancer type and clinical context. While WSI has revolutionized digital histopathology by enabling automated, precise analysis, it remains vulnerable to artifacts introduced during slide preparation and scanning. These artifacts can compromise downstream image analysis. To address this challenge, we propose and compare three robust artifact detection approaches for WSIs: (1) a foundation model-based approach (FMA) using a fine-tuned Unified Neural Image (UNI) architecture, (2) a deep learning approach (DLA) built on a ResNet50 backbone, and (3) a knowledge-based approach (KBA) leveraging handcrafted features from texture, color, and frequency-based metrics. The methods target six common artifact types: tissue folds, out-of-focus regions, air bubbles, tissue damage, marker traces, and blood contamination. Evaluations were conducted on 50,000+ image patches from diverse scanners (Hamamatsu, Philips, Leica Aperio AT2) across multiple sites. The FMA achieved the highest patch-wise AUROC of 0.995 (95% CI [0.994, 0.995]), outperforming the ResNet50-based method (AUROC: 0.977, 95% CI [0.977, 0.978]) and the KBA (AUROC: 0.940, 95% CI [0.933, 0.946]). To translate detection into actionable insights, we developed a quality report scorecard that quantifies high-quality patches and visualizes artifact distributions.
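The knowledge-based approach (KBA) described above relies on handcrafted texture, color, and frequency-based metrics. As a minimal illustrative sketch (not the authors' implementation — the feature choices, function names, and thresholds below are assumptions for demonstration only), a variance-of-Laplacian focus measure and a simple intensity statistic can flag two of the six artifact types, out-of-focus regions and tissue folds, at the patch level:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian response over a grayscale patch.

    Low values indicate little high-frequency content, a common
    heuristic for out-of-focus (blurred) regions.
    """
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    # Valid-mode 3x3 convolution written with array slices.
    for i in range(3):
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def patch_quality_flags(rgb: np.ndarray,
                        blur_thresh: float = 50.0,
                        dark_thresh: float = 60.0) -> dict:
    """Flag a patch with simple handcrafted heuristics.

    Thresholds are illustrative placeholders, not values from the paper;
    a real pipeline would calibrate them per scanner and stain.
    """
    gray = rgb.mean(axis=2)  # naive luminance proxy
    return {
        "out_of_focus": bool(laplacian_variance(gray) < blur_thresh),
        # Tissue folds tend to appear as dense, abnormally dark regions.
        "possible_fold": bool(gray.mean() < dark_thresh),
    }
```

Aggregating such per-patch flags over a slide is one plausible way to produce the kind of quality scorecard the paper describes, e.g. the fraction of patches with no flags raised.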
Related papers
- Deep learning in computed tomography pulmonary angiography imaging: a dual-pronged approach for pulmonary embolism detection [0.0]
The aim of this study is to leverage deep learning techniques to enhance the Computer Assisted Diagnosis (CAD) of Pulmonary Embolism (PE).
Our classification system includes an Attention-Guided Convolutional Neural Network (AG-CNN) that uses local context by employing an attention mechanism.
AG-CNN achieves robust performance on the FUMPE dataset, achieving an AUROC of 0.927, sensitivity of 0.862, specificity of 0.879, and an F1-score of 0.805 with the Inception-v3 backbone architecture.
arXiv Detail & Related papers (2023-11-09T08:23:44Z)
- Using Multiple Dermoscopic Photographs of One Lesion Improves Melanoma Classification via Deep Learning: A Prognostic Diagnostic Accuracy Study [0.0]
This study evaluated the impact of multiple real-world dermoscopic views of a single lesion of interest on a CNN-based melanoma classifier.
Using multiple real-world images is an inexpensive method to positively impact the performance of a CNN-based melanoma classifier.
arXiv Detail & Related papers (2023-06-05T11:55:57Z)
- Vision Transformer for Efficient Chest X-ray and Gastrointestinal Image Classification [2.3293678240472517]
This study uses different CNNs and transformer-based methods with a wide range of data augmentation techniques.
We evaluated their performance on three medical image datasets from different modalities.
arXiv Detail & Related papers (2023-04-23T04:07:03Z)
- Improving Automated Hemorrhage Detection in Sparse-view Computed Tomography via Deep Convolutional Neural Network based Artifact Reduction [3.9874211732430447]
We trained a U-Net for artifact reduction on simulated sparse-view cranial CT scans from 3000 patients.
We also trained a convolutional neural network on fully sampled CT data from 17,545 patients for automated hemorrhage detection.
The U-Net performed superior compared to unprocessed and TV-processed images with respect to image quality and automated hemorrhage diagnosis.
arXiv Detail & Related papers (2023-03-16T14:21:45Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
To investigate chest radiograph (CXR) classification performance of vision transformers (ViT) and interpretability of attention-based saliency.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs had comparable CXR classification AUCs compared with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- WSSS4LUAD: Grand Challenge on Weakly-supervised Tissue Semantic Segmentation for Lung Adenocarcinoma [51.50991881342181]
This challenge includes 10,091 patch-level annotations and over 130 million labeled pixels.
The first-place team achieved an mIoU of 0.8413 (tumor: 0.8389, stroma: 0.7931, normal: 0.8919).
arXiv Detail & Related papers (2022-04-13T15:27:05Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification, using what the authors describe as the largest and richest femur-fracture dataset collected to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with about 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Chest x-ray automated triage: a semiologic approach designed for clinical implementation, exploiting different types of labels through a combination of four Deep Learning architectures [83.48996461770017]
This work presents a Deep Learning method based on the late fusion of different convolutional architectures.
We built four training datasets combining images from public chest x-ray datasets and our institutional archive.
We trained four different Deep Learning architectures and combined their outputs with a late fusion strategy, obtaining a unified tool.
arXiv Detail & Related papers (2020-12-23T14:38:35Z)
- Critical Evaluation of Deep Neural Networks for Wrist Fracture Detection [1.0617212070722408]
Wrist fracture is the most common type of fracture, with a high incidence rate.
Recent advances in the field of Deep Learning (DL) have shown that wrist fracture detection can be automated using Convolutional Neural Networks.
Our results reveal that a typical state-of-the-art approach, such as DeepWrist, has a substantially lower performance on the challenging test set.
arXiv Detail & Related papers (2020-12-04T13:35:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.