An Investigation into Glomeruli Detection in Kidney H&E and PAS Images
using YOLO
- URL: http://arxiv.org/abs/2307.13199v1
- Date: Tue, 25 Jul 2023 01:35:37 GMT
- Title: An Investigation into Glomeruli Detection in Kidney H&E and PAS Images
using YOLO
- Authors: Kimia Hemmatirad, Morteza Babaie, Jeffrey Hodgin, Liron Pantanowitz,
H.R.Tizhoosh
- Abstract summary: This paper studies YOLO-v4 (You-Only-Look-Once), a real-time object detector for microscopic images.
YOLO uses a single neural network to predict several bounding boxes and class probabilities for objects of interest.
Multiple experiments have been designed and conducted based on different training data of two public datasets and a private dataset from the University of Michigan for fine-tuning the model.
The model was tested on the private dataset from the University of Michigan, serving as an external validation of two different stains, namely hematoxylin and eosin (H&E) and periodic acid-Schiff (PAS).
- Score: 0.4012351415340318
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Context: Analyzing digital pathology images is necessary to draw diagnostic
conclusions by investigating tissue patterns and cellular morphology. However,
manual evaluation can be time-consuming, expensive, and prone to inter- and
intra-observer variability. Objective: To assist pathologists using
computerized solutions, automated tissue structure detection and segmentation
must be proposed. Furthermore, generating pixel-level object annotations for
histopathology images is expensive and time-consuming. As a result, detection
models with bounding box labels may be a feasible solution. Design: This paper
studies YOLO-v4 (You-Only-Look-Once), a real-time object detector for
microscopic images. YOLO uses a single neural network to predict several
bounding boxes and class probabilities for objects of interest. YOLO can
enhance detection performance by training on whole slide images. YOLO-v4 has
been used in this paper for glomeruli detection in human kidney images.
Multiple experiments have been designed and conducted based on different
training data of two public datasets and a private dataset from the University
of Michigan for fine-tuning the model. The model was tested on the private
dataset from the University of Michigan, serving as an external validation of
two different stains, namely hematoxylin and eosin (H&E) and periodic
acid-Schiff (PAS). Results: Average specificity and sensitivity for all
experiments, and comparison of existing segmentation methods on the same
datasets are discussed. Conclusions: Automated glomeruli detection in human
kidney images is possible using modern AI models. The design and validation for
different stains still depends on variability of public multi-stain datasets.
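The reported average specificity and sensitivity are standard detection metrics derived from matching predicted boxes to ground truth. A minimal illustrative sketch of IoU-based matching and sensitivity (box coordinates, function names, and the 0.5 IoU threshold here are assumptions for illustration, not the paper's code; specificity additionally requires a definition of true negatives, which varies by study):

```python
# Hypothetical sketch: match predicted glomeruli boxes to ground-truth boxes
# by intersection-over-union (IoU) and derive sensitivity (recall).

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def sensitivity(preds, truths, thr=0.5):
    """Fraction of ground-truth glomeruli matched by at least one prediction."""
    matched = sum(1 for t in truths if any(iou(p, t) >= thr for p in preds))
    return matched / len(truths) if truths else 1.0

preds = [(10, 10, 50, 50), (100, 100, 140, 140)]
truths = [(12, 12, 52, 52), (200, 200, 240, 240)]
print(sensitivity(preds, truths))  # 0.5 (one of two glomeruli detected)
```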
Related papers
- DiffDoctor: Diagnosing Image Diffusion Models Before Treating [57.82359018425674]
We propose DiffDoctor, a two-stage pipeline to assist image diffusion models in generating fewer artifacts.
We collect a dataset of over 1M flawed synthesized images and set up an efficient human-in-the-loop annotation process.
The learned artifact detector is then involved in the second stage to tune the diffusion model through assigning a per-pixel confidence map for each image.
arXiv Detail & Related papers (2025-01-21T18:56:41Z) - An analysis of data variation and bias in image-based dermatological datasets for machine learning classification [2.039829968340841]
In clinical dermatology, classification models can detect malignant lesions on patients' skin using only RGB images as input.
Most learning-based methods employ data acquired from dermoscopic datasets on training, which are large and validated by a gold standard.
This work aims to evaluate the gap between dermoscopic and clinical samples and understand how the dataset variations impact training.
arXiv Detail & Related papers (2025-01-15T17:18:46Z) - Grad-CAMO: Learning Interpretable Single-Cell Morphological Profiles from 3D Cell Painting Images [0.0]
We introduce Grad-CAMO, a novel single-cell interpretability score for supervised feature extractors.
Grad-CAMO measures the proportion of a model's attention that is concentrated on the cell of interest versus the background.
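The proportion described above, attention inside the cell of interest versus the background, can be sketched with NumPy (a hypothetical illustration of the idea; the actual Grad-CAMO implementation may differ):

```python
import numpy as np

def attention_ratio(heatmap, cell_mask):
    """Share of total attention falling inside the cell-of-interest mask.
    heatmap: non-negative 2-D saliency map; cell_mask: boolean 2-D array."""
    total = heatmap.sum()
    if total == 0:
        return 0.0
    return float(heatmap[cell_mask].sum() / total)

heatmap = np.array([[0.0, 1.0], [1.0, 2.0]])
mask = np.array([[False, False], [False, True]])  # cell occupies one pixel
print(attention_ratio(heatmap, mask))  # 0.5
```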
arXiv Detail & Related papers (2024-03-26T11:48:37Z) - MedYOLO: A Medical Image Object Detection Framework [0.0]
We report on MedYOLO, a 3-D object detection framework using the one-shot detection method of the YOLO family of models.
We found our models achieve high performance on commonly present medium and large-sized structures such as the heart, liver, and pancreas.
arXiv Detail & Related papers (2023-12-12T20:46:14Z) - Zero-shot Model Diagnosis [80.36063332820568]
A common approach to evaluate deep learning models is to build a labeled test set with attributes of interest and assess how well the model performs on it.
This paper argues the case that Zero-shot Model Diagnosis (ZOOM) is possible without the need for a test set nor labeling.
arXiv Detail & Related papers (2023-03-27T17:59:33Z) - Pixel-Level Explanation of Multiple Instance Learning Models in
Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100 000 single cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
arXiv Detail & Related papers (2023-03-15T14:00:11Z) - Stain-invariant self supervised learning for histopathology image
analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z) - An unobtrusive quality supervision approach for medical image annotation [8.203076178571576]
It is desirable that users annotate unseen data while an automated system unobtrusively rates their performance.
We evaluate two methods for the generation of synthetic individual cell images: conditional Generative Adversarial Networks and Diffusion Models.
Users could not detect 52.12% of the generated images, proving the feasibility of replacing original cells with synthetic cells without being noticed.
arXiv Detail & Related papers (2022-11-11T11:57:26Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - Comparisons among different stochastic selection of activation layers
for convolutional neural networks for healthcare [77.99636165307996]
We classify biomedical images using ensembles of neural networks.
We select our activations among the following ones: ReLU, leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped ReLU, Swish, Mish, Mexican Linear Unit, Parametric Deformable Linear Unit, Soft Root Sign.
arXiv Detail & Related papers (2020-11-24T01:53:39Z) - Improving Calibration and Out-of-Distribution Detection in Medical Image
Segmentation with Convolutional Neural Networks [8.219843232619551]
Convolutional Neural Networks (CNNs) have been shown to be powerful medical image segmentation models.
We advocate for multi-task learning, i.e., training a single model on several different datasets.
We show not only that a single CNN learns to automatically recognize the context and accurately segment the organ of interest in each context, but also that such a joint model often produces more accurate and better-calibrated predictions.
arXiv Detail & Related papers (2020-04-12T23:42:51Z)
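Calibration, as mentioned in the blurb above, is commonly quantified with expected calibration error (ECE): binned confidence versus accuracy over pixels. A toy sketch (the binning scheme and names are illustrative assumptions, not that paper's code):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Toy ECE: average |confidence - accuracy| over equal-width confidence
    bins, weighted by the fraction of samples falling in each bin."""
    probs, labels = np.asarray(probs, dtype=float), np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs > lo) & (probs <= hi)
        if in_bin.any():
            conf = probs[in_bin].mean()   # mean predicted confidence in bin
            acc = labels[in_bin].mean()   # empirical accuracy in bin
            ece += in_bin.mean() * abs(conf - acc)
    return float(ece)

print(round(expected_calibration_error([0.9, 0.8, 0.7, 0.2], [1, 1, 0, 0]), 3))
```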
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.