Mind the Gap: Continuous Magnification Sampling for Pathology Foundation Models
- URL: http://arxiv.org/abs/2601.02198v1
- Date: Mon, 05 Jan 2026 15:19:59 GMT
- Title: Mind the Gap: Continuous Magnification Sampling for Pathology Foundation Models
- Authors: Alexander Möllers, Julius Hense, Florian Schulz, Timo Milbich, Maximilian Alber, Lukas Ruff
- Abstract summary: We show that the widely used discrete uniform sampling of magnifications leads to degradation at intermediate magnifications. We derive sampling distributions that optimize representation quality across magnification scales. Experiments show that continuous sampling substantially improves over discrete sampling at intermediate magnifications.
- Score: 39.846652646235036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In histopathology, pathologists examine both tissue architecture at low magnification and fine-grained morphology at high magnification. Yet, the performance of pathology foundation models across magnifications and the effect of magnification sampling during training remain poorly understood. We model magnification sampling as a multi-source domain adaptation problem and develop a simple theoretical framework that reveals systematic trade-offs between sampling strategies. We show that the widely used discrete uniform sampling of magnifications (0.25, 0.5, 1.0, 2.0 mpp) leads to degradation at intermediate magnifications. We introduce continuous magnification sampling, which removes gaps in magnification coverage while preserving performance at standard scales. Further, we derive sampling distributions that optimize representation quality across magnification scales. To evaluate these strategies, we introduce two new benchmarks (TCGA-MS, BRACS-MS) with appropriate metrics. Our experiments show that continuous sampling substantially improves over discrete sampling at intermediate magnifications, with gains of up to 4 percentage points in balanced classification accuracy, and that optimized distributions can further improve performance. Finally, we evaluate current histopathology foundation models, finding that magnification is a primary driver of performance variation across models. Our work paves the way towards future pathology foundation models that perform reliably across magnifications.
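The abstract contrasts discrete uniform sampling over the four standard magnification levels (0.25, 0.5, 1.0, 2.0 mpp) with continuous sampling over the full magnification range. A minimal sketch of the two strategies, assuming a log-uniform distribution for the continuous case (the paper's actual optimized distributions are not given in the abstract, so this choice is illustrative):

```python
import math
import random

# Standard discrete magnification levels, in microns per pixel (mpp).
DISCRETE_MPP = (0.25, 0.5, 1.0, 2.0)

def sample_mpp_discrete():
    """Discrete uniform sampling: pick one of the standard levels.
    This leaves gaps at intermediate magnifications (e.g. ~0.7 mpp)."""
    return random.choice(DISCRETE_MPP)

def sample_mpp_continuous(low=0.25, high=2.0):
    """Continuous sampling over the full range [low, high].
    Log-uniform is an assumption here: magnification is a
    multiplicative scale, so uniformity in log-space treats
    each doubling of mpp equally."""
    return math.exp(random.uniform(math.log(low), math.log(high)))

# Draws from the continuous sampler cover the whole interval,
# including magnifications between the discrete levels.
magnifications = [sample_mpp_continuous() for _ in range(5)]
```

A training loop would then resample each patch to the drawn mpp before augmentation, so the encoder sees every magnification in the range rather than only the four discrete ones.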
Related papers
- Investigating the Impact of Histopathological Foundation Models on Regressive Prediction of Homologous Recombination Deficiency [52.50039435394964]
We systematically evaluate foundation models for regression-based tasks. We extract patch-level features from whole slide images (WSI) using five state-of-the-art foundation models. Models are trained to predict continuous HRD scores based on these extracted features across breast, endometrial, and lung cancer cohorts.
arXiv Detail & Related papers (2026-01-29T14:06:50Z)
- Foundation Models in Dermatopathology: Skin Tissue Classification [0.05397680436511065]
This study evaluates the performance of two foundation models, UNI and Virchow2, as feature extractors for classifying whole-slide images. Patch-level embeddings were aggregated into slide-level features using a mean-aggregation strategy. Results demonstrate that patch-level features extracted using Virchow2 outperformed those extracted via UNI across most slide-level classifiers.
arXiv Detail & Related papers (2025-10-24T17:21:43Z)
- M^3-GloDets: Multi-Region and Multi-Scale Analysis of Fine-Grained Diseased Glomerular Detection [8.016032806222892]
We present M3-GloDet, a systematic framework designed to enable thorough evaluation of detection models. We evaluate both long-standing benchmark architectures and recently introduced state-of-the-art models that have achieved notable performance. Our aim is to advance the understanding of model strengths and limitations, and to offer actionable insights for the refinement of automated detection strategies.
arXiv Detail & Related papers (2025-08-25T04:52:34Z)
- PWD: Prior-Guided and Wavelet-Enhanced Diffusion Model for Limited-Angle CT [6.532073662427578]
We propose a fast-sampling diffusion model with prior-information embedding and wavelet feature fusion for LACT reconstruction. The proposed model, PWD, enables efficient sampling while preserving reconstruction fidelity in LACT. Using only 50 sampling steps, PWD achieves at least a 1.7 dB improvement in PSNR and a 10% gain in SSIM.
arXiv Detail & Related papers (2025-06-30T08:28:32Z)
- Diffusion Models in Low-Level Vision: A Survey [82.77962165415153]
Diffusion model-based solutions have been widely acclaimed for their ability to produce samples of superior quality and diversity. We present three generic diffusion modeling frameworks and explore their correlations with other deep generative models. We summarize extended diffusion models applied in other tasks, including medical, remote sensing, and video scenarios.
arXiv Detail & Related papers (2024-06-17T01:49:27Z)
- Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood [64.95663299945171]
Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming.
There exists a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models.
We propose cooperative diffusion recovery likelihood (CDRL), an effective approach to tractably learn and sample from a series of EBMs.
arXiv Detail & Related papers (2023-09-10T22:05:24Z)
- Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models [75.52575380824051]
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
arXiv Detail & Related papers (2023-06-05T22:09:06Z)
- DiffMIC: Dual-Guidance Diffusion Network for Medical Image Classification [32.67098520984195]
We propose the first diffusion-based model (named DiffMIC) to address general medical image classification.
Our experimental results demonstrate that DiffMIC outperforms state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2023-03-19T09:15:45Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- StyPath: Style-Transfer Data Augmentation For Robust Histology Image Classification [6.690876060631452]
We propose a novel pipeline to build robust deep neural networks for AMR classification based on StyPath.
Each image was generated in 1.84 ± 0.03 seconds using a single GTX TITAN V and PyTorch.
Our results imply that our style-transfer augmentation technique improves histological classification performance.
arXiv Detail & Related papers (2020-07-09T18:02:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.