DeepAf: One-Shot Spatiospectral Auto-Focus Model for Digital Pathology
- URL: http://arxiv.org/abs/2510.05315v1
- Date: Mon, 06 Oct 2025 19:28:08 GMT
- Title: DeepAf: One-Shot Spatiospectral Auto-Focus Model for Digital Pathology
- Authors: Yousef Yeganeh, Maximilian Frantzen, Michael Lee, Kun-Hsing Yu, Nassir Navab, Azade Farshad
- Abstract summary: We introduce a novel automated microscopic system powered by DeepAf. DeepAf combines spatial and spectral features through a hybrid architecture for single-shot focus prediction. The system achieves 0.90 AUC in cancer classification at 4x magnification, a significant achievement at lower magnification than typical 20x WSI scans.
- Score: 37.648157065553185
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While Whole Slide Imaging (WSI) scanners remain the gold standard for digitizing pathology samples, their high cost limits accessibility in many healthcare settings. Other low-cost solutions also face critical limitations: automated microscopes struggle with consistent focus across varying tissue morphology, traditional auto-focus methods require time-consuming focal stacks, and existing deep-learning approaches either need multiple input images or lack generalization capability across tissue types and staining protocols. We introduce a novel automated microscopic system powered by DeepAf, a novel auto-focus framework that uniquely combines spatial and spectral features through a hybrid architecture for single-shot focus prediction. The proposed network automatically regresses the distance to the optimal focal point using the extracted spatiospectral features and adjusts the control parameters for optimal image outcomes. Our system transforms conventional microscopes into efficient slide scanners, reducing focusing time by 80% compared to stack-based methods while achieving focus accuracy of 0.18 µm on same-lab samples, matching the performance of dual-image methods (0.19 µm) with half the input requirements. DeepAf demonstrates robust cross-lab generalization with only 0.72% false focus predictions and 90% of predictions within the depth of field. Through an extensive clinical study of 536 brain tissue samples, our system achieves 0.90 AUC in cancer classification at 4x magnification, a significant achievement at lower magnification than typical 20x WSI scans. The result is a comprehensive hardware-software design enabling accessible, real-time digital pathology in resource-constrained settings while maintaining diagnostic accuracy.
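The core idea in the abstract, extracting spatial and spectral features from a single image and regressing the distance to the optimal focal plane, can be sketched as follows. This is a minimal illustrative stand-in, not the paper's architecture: the hand-crafted gradient and FFT-band features, the linear regression head, and the synthetic Gaussian defocus model are all assumptions, and the sketch regresses only the blur magnitude since a symmetric blur model carries no sign information (DeepAf's learned network recovers signed distance from richer cues).

```python
import numpy as np

def spatiospectral_features(img):
    """Combine spatial sharpness cues with spectral (FFT) energy bands.

    Illustrative features only -- the paper's hybrid network learns its
    own spatial and spectral representations end to end.
    """
    # Spatial cue: gradient energy drops as the image defocuses.
    gy, gx = np.gradient(img.astype(float))
    grad_energy = np.mean(gx ** 2 + gy ** 2)

    # Spectral cues: defocus suppresses high spatial frequencies.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    r_max = r.max()
    bands = [spec[(r >= lo * r_max) & (r < hi * r_max)].mean()
             for lo, hi in [(0.0, 0.1), (0.1, 0.3), (0.3, 0.6)]]
    high_low_ratio = bands[2] / (bands[0] + 1e-9)
    return np.array([grad_energy, high_low_ratio, bands[1]])

def fit_defocus_regressor(features, distances):
    """Least-squares stand-in for the network's regression head."""
    X = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(X, distances, rcond=None)
    return coef

def predict_defocus(coef, feat):
    """One-shot prediction: features of a single image -> defocus estimate."""
    return float(np.append(feat, 1.0) @ coef)

# --- tiny synthetic demo ---
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))

def defocus(img, d):
    # Crude defocus model: Gaussian low-pass whose width grows with |d|.
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    mask = np.exp(-(fy ** 2 + fx ** 2) * (1.0 + 40.0 * abs(d)) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

dists = np.linspace(0.0, 2.0, 11)
feats = np.array([spatiospectral_features(defocus(sharp, d)) for d in dists])
coef = fit_defocus_regressor(feats, dists)
preds = np.array([predict_defocus(coef, f) for f in feats])
```

The point of the sketch is the data flow: a single image yields both spatial and spectral statistics, and one regression pass maps them to a focal-distance correction, with no focal stack required.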
Related papers
- A bag of tricks for real-time Mitotic Figure detection [0.0]
We build on the efficient RTMDet single-stage object detector to achieve high inference speed suitable for clinical deployment. We employ targeted, hard negative mining on necrotic and debris tissue to reduce false positives. On the preliminary test set of the MItosis DOmain Generalization (MIDOG) 2025 challenge, our single-stage RTMDet-S based approach reaches an F1 of 0.81.
arXiv Detail & Related papers (2025-08-27T11:45:44Z) - Lightweight Relational Embedding in Task-Interpolated Few-Shot Networks for Enhanced Gastrointestinal Disease Classification [0.0]
Colon cancer detection is crucial for increasing patient survival rates, yet colonoscopy depends on obtaining adequate, high-quality endoscopic images. A Few-Shot Learning architecture enables our model to rapidly adapt to unseen fine-grained endoscopic image patterns. Our model demonstrated superior performance, achieving an accuracy of 90.1%, precision of 0.845, recall of 0.942, and an F1 score of 0.891.
arXiv Detail & Related papers (2025-05-30T16:54:51Z) - BlurryScope enables compact, cost-effective scanning microscopy for HER2 scoring using deep learning on blurry images [0.0]
"BlurryScope" is a cost-effective and compact solution for automated inspection and analysis of tissue sections.<n>We implemented automated classification of human epidermal growth factor receptor 2 (HER2) scores on motion-red images of stained breast tissue sections.<n>We achieved testing accuracies of 79.3% and 89.7% for 4-class (0, 1+, 2+, 3+) and 2-class (0/1+, 2+/3+) HER2 classification, respectively.
arXiv Detail & Related papers (2024-10-23T04:46:36Z) - A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
We propose a unified MRI reconstruction model robust to various measurement undersampling patterns and image resolutions. Our model improves SSIM by 11% and PSNR by 4 dB over a state-of-the-art CNN (End-to-End VarNet) with 600× faster inference than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z) - Deep Reinforcement Learning Based System for Intraoperative Hyperspectral Video Autofocusing [2.476200036182773]
This work integrates a focus-tunable liquid lens into a video HSI exoscope.
A first-of-its-kind robotic focal-time scan was performed to create a realistic and reproducible testing dataset.
arXiv Detail & Related papers (2023-07-21T15:04:21Z) - Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z) - Unsupervised deep learning techniques for powdery mildew recognition based on multispectral imaging [63.62764375279861]
This paper presents a deep learning approach to automatically recognize powdery mildew on cucumber leaves.
We focus on unsupervised deep learning techniques applied to multispectral imaging data.
We propose the use of autoencoder architectures to investigate two strategies for disease detection.
arXiv Detail & Related papers (2021-12-20T13:29:13Z) - Increasing a microscope's effective field of view via overlapped imaging and machine learning [4.23935174235373]
This work demonstrates a multi-lens microscopic imaging system that overlaps multiple independent fields of view on a single sensor for high-efficiency automated specimen analysis.
arXiv Detail & Related papers (2021-10-10T22:52:36Z) - AOSLO-net: A deep learning-based method for automatic segmentation of retinal microaneurysms from adaptive optics scanning laser ophthalmoscope images [3.8848390007421196]
We introduce AOSLO-net, a deep neural network framework with customized training policy, to automatically segment MAs from AOSLO images.
We evaluate the performance of AOSLO-net using 87 DR AOSLO images demonstrating very accurate MA detection and segmentation.
arXiv Detail & Related papers (2021-06-05T05:06:36Z) - A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern Deep Learning framework is used to correct autonomously the setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z) - Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.