A Single Detect Focused YOLO Framework for Robust Mitotic Figure Detection
- URL: http://arxiv.org/abs/2509.02637v1
- Date: Mon, 01 Sep 2025 20:41:48 GMT
- Title: A Single Detect Focused YOLO Framework for Robust Mitotic Figure Detection
- Authors: Yasemin Topuz, M. Taha Gökcan, Serdar Yıldız, Songül Varlı
- Abstract summary: We introduce SDF-YOLO, a lightweight yet domain-robust detection framework for small, rare targets such as mitotic figures. The model builds on YOLOv11 with task-specific modifications, including a single detection head aligned with mitotic figure scale. It achieved an average precision (AP) of 0.799, with a precision of 0.758, a recall of 0.775, an F1 score of 0.766, and an FROC-AUC of 5.793.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mitotic figure detection is a crucial task in computational pathology, as mitotic activity serves as a strong prognostic marker for tumor aggressiveness. However, domain variability that arises from differences in scanners, tissue types, and staining protocols poses a major challenge to the robustness of automated detection methods. In this study, we introduce SDF-YOLO (Single Detect Focused YOLO), a lightweight yet domain-robust detection framework designed specifically for small, rare targets such as mitotic figures. The model builds on YOLOv11 with task-specific modifications, including a single detection head aligned with mitotic figure scale, coordinate attention to enhance positional sensitivity, and improved cross-channel feature mixing. Experiments were conducted on three datasets that span human and canine tumors: MIDOG++, canine cutaneous mast cell tumor (CCMCT), and canine mammary carcinoma (CMC). When submitted to the preliminary test set for the MIDOG 2025 challenge, SDF-YOLO achieved an average precision (AP) of 0.799, with a precision of 0.758, a recall of 0.775, an F1 score of 0.766, and an FROC-AUC of 5.793, demonstrating both competitive accuracy and computational efficiency. These results indicate that SDF-YOLO provides a reliable and efficient framework for robust mitotic figure detection across diverse domains.
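The coordinate-attention idea mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is an illustrative toy under stated assumptions, not the authors' implementation: the function names are invented here, and the learned 1x1 convolutions of the real coordinate-attention module are omitted. The core mechanism is that pooling along each spatial axis separately yields gates that retain row and column positions, which helps localize small objects such as mitotic figures.

```python
import numpy as np

def coordinate_attention_pool(x):
    """Directional pooling step of coordinate attention.

    x: feature map of shape (C, H, W).
    Returns two position-preserving descriptors:
      h_desc: (C, H) -- averaged over width, keeps row positions
      w_desc: (C, W) -- averaged over height, keeps column positions
    """
    h_desc = x.mean(axis=2)  # collapse W, retain H coordinates
    w_desc = x.mean(axis=1)  # collapse H, retain W coordinates
    return h_desc, w_desc

def apply_coordinate_gates(x, h_desc, w_desc):
    """Turn the descriptors into sigmoid gates and reweight the input.

    In the full module, learned 1x1 convolutions transform the
    descriptors first; they are omitted here for illustration.
    """
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    g_h = sigmoid(h_desc)[:, :, None]  # (C, H, 1), broadcasts over W
    g_w = sigmoid(w_desc)[:, None, :]  # (C, 1, W), broadcasts over H
    return x * g_h * g_w

# Toy usage: a 3-channel 8x8 feature map with one "hot" location.
feat = np.zeros((3, 8, 8))
feat[:, 2, 5] = 4.0  # small, rare target (e.g. a mitotic figure)
out = apply_coordinate_gates(feat, *coordinate_attention_pool(feat))
```

Unlike global average pooling, which discards all spatial layout, the two directional descriptors keep one coordinate each, so the gates can amplify the specific row and column where a small object sits.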
Related papers
- Artefact-Aware Fungal Detection in Dermatophytosis: A Real-Time Transformer-Based Approach for KOH Microscopy [0.28909295536379814]
This study presents a transformer-based detection framework using the RT-DETR model architecture. It achieves precise, query-driven localization of fungal structures in high-resolution potassium hydroxide (KOH) images.
arXiv Detail & Related papers (2026-02-22T12:35:17Z) - CONFIDE: Hallucination Assessment for Reliable Biomolecular Structure Prediction and Design [46.12506067241116]
We present CODE (Chain of Diffusion Embeddings), a self-evaluating metric to quantify topological frustration. We propose CONFIDE, a unified evaluation framework that combines energetic and topological perspectives. By combining data-driven embeddings with theoretical insight, CODE and CONFIDE outperform existing metrics across a wide range of biomolecular systems.
arXiv Detail & Related papers (2025-11-20T03:38:46Z) - An Explainable Hybrid AI Framework for Enhanced Tuberculosis and Symptom Detection [55.35661671061754]
Tuberculosis remains a critical global health issue, particularly in resource-limited and remote areas. We propose a framework which enhances disease and symptom detection on chest X-rays by integrating two supervised heads and a self-supervised head. Our model achieves an accuracy of 98.85% for distinguishing between COVID-19, tuberculosis, and normal cases, and a macro-F1 score of 90.09% for multilabel symptom detection.
arXiv Detail & Related papers (2025-10-21T17:18:55Z) - Peptidomic-Based Prediction Model for Coronary Heart Disease Using a Multilayer Perceptron Neural Network [0.0]
Coronary heart disease (CHD) is a leading cause of death worldwide and contributes significantly to annual healthcare expenditures. To develop a non-invasive diagnostic approach, we designed a model based on a multilayer perceptron (MLP) neural network. The model achieved a precision, sensitivity, and specificity of 95.67 percent, with an F1-score of 0.9565.
arXiv Detail & Related papers (2025-09-04T04:54:02Z) - Ensemble YOLO Framework for Multi-Domain Mitotic Figure Detection in Histopathology Images [0.7541656202645494]
Two modern one-stage detectors, YOLOv5 and YOLOv8, were trained on the MIDOG++, CMC, and CCMCT datasets. YOLOv5 achieved superior precision, while YOLOv8 provided improved recall. These findings highlight the effectiveness of ensemble strategies to advance automated mitosis detection in digital pathology.
arXiv Detail & Related papers (2025-09-03T02:43:02Z) - RF-DETR for Robust Mitotic Figure Detection: A MIDOG 2025 Track 1 Approach [0.0]
This paper presents our approach for the MIDOG 2025 challenge Track 1, focusing on robust mitotic figure detection across diverse histological contexts. We employed RF-DETR (Roboflow Detection Transformer) with hard negative mining, trained on the MIDOG++ dataset. On the preliminary test set, our method achieved an F1 score of 0.789 with a recall of 0.839 and precision of 0.746, demonstrating effective generalization across unseen domains.
arXiv Detail & Related papers (2025-08-29T16:04:50Z) - Robust Pan-Cancer Mitotic Figure Detection with YOLOv12 [1.2228119373158255]
We present a mitotic figure detection approach based on the state-of-the-art YOLOv12 object detection architecture. Our method achieved an F1-score of 0.801 on the preliminary test set (hotspots only) and ranked second on the final test leaderboard with an F1-score of 0.7216.
arXiv Detail & Related papers (2025-08-29T08:37:46Z) - A bag of tricks for real-time Mitotic Figure detection [0.0]
We build on the efficient RTMDet single-stage object detector to achieve high inference speed suitable for clinical deployment. We employ targeted hard negative mining on necrotic and debris tissue to reduce false positives. On the preliminary test set of the MItosis DOmain Generalization (MIDOG) 2025 challenge, our single-stage RTMDet-S based approach reaches an F1 of 0.81.
arXiv Detail & Related papers (2025-08-27T11:45:44Z) - LGE-Guided Cross-Modality Contrastive Learning for Gadolinium-Free Cardiomyopathy Screening in Cine CMR [51.11296719862485]
We propose a Contrastive Learning and Cross-Modal alignment framework for gadolinium-free cardiomyopathy screening using cine CMR sequences. By aligning the latent spaces of cine CMR and Late Gadolinium Enhancement (LGE) sequences, our model encodes fibrosis-specific pathology into cine CMR embeddings.
arXiv Detail & Related papers (2025-08-23T07:21:23Z) - A Novel Attention-Augmented Wavelet YOLO System for Real-time Brain Vessel Segmentation on Transcranial Color-coded Doppler [49.03919553747297]
We propose an AI-powered, real-time CoW auto-segmentation system capable of efficiently capturing cerebral arteries. No prior studies have explored AI-driven cerebrovascular segmentation using Transcranial Color-coded Doppler (TCCD). The proposed AAW-YOLO demonstrated strong performance in segmenting both ipsilateral and contralateral CoW vessels.
arXiv Detail & Related papers (2025-08-19T14:41:22Z) - CRTRE: Causal Rule Generation with Target Trial Emulation Framework [47.2836994469923]
We introduce a novel method called the causal rule generation with target trial emulation framework (CRTRE).
CRTRE applies randomized trial design principles to estimate the causal effect of association rules.
We then incorporate such association rules into downstream applications such as the prediction of disease onset.
arXiv Detail & Related papers (2024-11-10T02:40:06Z) - Multi-centric AI Model for Unruptured Intracranial Aneurysm Detection and Volumetric Segmentation in 3D TOF-MRI [6.397650339311053]
We developed an open-source nnU-Net-based AI model for combined detection and segmentation of unruptured intracranial aneurysms (UICA) in 3D TOF-MRI.
Four distinct training datasets were created, and the nnU-Net framework was used for model development.
The primary model showed 85% sensitivity and 0.23 FP/case rate, outperforming the ADAM-challenge winner (61%) and a nnU-Net trained on ADAM data (51%) in sensitivity.
arXiv Detail & Related papers (2024-08-30T08:57:04Z) - OMG-Net: A Deep Learning Framework Deploying Segment Anything to Detect Pan-Cancer Mitotic Figures from Haematoxylin and Eosin-Stained Slides [27.84599956781646]
In this study, we propose an artificial intelligence (AI) approach to detect MFs in digitised whole slide images (WSIs).
Here we establish the largest pan-cancer dataset of mitotic figures by combining an in-house dataset of soft tissue tumours (STMF) with five open-source mitotic datasets (IPAC, TUPAC, CCMCT, CMC and MIDOG++).
We then employed a two-stage framework, the Optimised Mitoses Generator Network (OMG-Net), to classify MFs.
arXiv Detail & Related papers (2024-07-17T17:53:37Z) - AXIAL: Attention-based eXplainability for Interpretable Alzheimer's Localized Diagnosis using 2D CNNs on 3D MRI brain scans [43.06293430764841]
This study presents an innovative method for Alzheimer's disease diagnosis using 3D MRI designed to enhance the explainability of model decisions.
Our approach adopts a soft attention mechanism, enabling 2D CNNs to extract volumetric representations.
With voxel-level precision, our method identified which specific areas are being paid attention to, identifying these predominant brain regions.
arXiv Detail & Related papers (2024-07-02T16:44:00Z) - Corneal endothelium assessment in specular microscopy images with Fuchs' dystrophy via deep regression of signed distance maps [48.498376125522114]
This paper proposes a UNet-based segmentation approach that requires minimal post-processing.
It achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs' dystrophy.
arXiv Detail & Related papers (2022-10-13T15:34:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.