Training state-of-the-art pathology foundation models with orders of magnitude less data
- URL: http://arxiv.org/abs/2504.05186v1
- Date: Mon, 07 Apr 2025 15:38:12 GMT
- Title: Training state-of-the-art pathology foundation models with orders of magnitude less data
- Authors: Mikhail Karasikov, Joost van Doorn, Nicolas Känzig, Melis Erdal Cesur, Hugo Mark Horlings, Robert Berke, Fei Tang, Sebastian Otálora,
- Abstract summary: We present three novel vision foundation models (FMs) trained on up to two orders of magnitude fewer WSIs than those used to train other state-of-the-art FMs. Even the model trained on TCGA alone (12k WSIs) outperforms most existing FMs and, on average, matches Virchow2, the second-best FM published to date.
- Score: 1.7005561101170015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of computational pathology has recently seen rapid advances driven by the development of modern vision foundation models (FMs), typically trained on vast collections of pathology images. Recent studies demonstrate that increasing the training data set and model size and integrating domain-specific image processing techniques can significantly enhance the model's performance on downstream tasks. Building on these insights, our work incorporates several recent modifications to the standard DINOv2 framework from the literature to optimize the training of pathology FMs. We also apply a post-training procedure for fine-tuning models on higher-resolution images to further enrich the information encoded in the embeddings. We present three novel pathology FMs trained on up to two orders of magnitude fewer WSIs than those used to train other state-of-the-art FMs while demonstrating a comparable or superior performance on downstream tasks. Even the model trained on TCGA alone (12k WSIs) outperforms most existing FMs and, on average, matches Virchow2, the second-best FM published to date. This suggests that there still remains a significant potential for further improving the models and algorithms used to train pathology FMs to take full advantage of the vast data collections.
Related papers
- Vision Foundation Models in Medical Image Analysis: Advances and Challenges [7.224426395050136]
Vision Foundation Models (VFMs) have sparked significant advances in the field of medical image analysis.
This paper reviews the state-of-the-art research on the adaptation of VFMs to medical image segmentation.
We discuss the latest developments in adapter-based improvements, knowledge distillation techniques, and multi-scale contextual feature modeling.
arXiv Detail & Related papers (2025-02-20T14:13:46Z) - Specialized Foundation Models Struggle to Beat Supervised Baselines [60.23386520331143]
We look at three modalities -- genomics, satellite imaging, and time series -- with multiple recent FMs and compare them to a standard supervised learning workflow. We find that it is consistently possible to train simple supervised models that match or even outperform the latest foundation models.
arXiv Detail & Related papers (2024-11-05T04:10:59Z) - PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z) - YaART: Yet Another ART Rendering Technology [119.09155882164573]
This study introduces YaART, a novel production-grade text-to-image cascaded diffusion model aligned to human preferences.
We analyze how these choices affect both the efficiency of the training process and the quality of the generated images.
We demonstrate that models trained on smaller datasets of higher-quality images can successfully compete with those trained on larger datasets.
arXiv Detail & Related papers (2024-04-08T16:51:19Z) - Towards Large-Scale Training of Pathology Foundation Models [1.5861468117231254]
We release and make publicly available the first batch of our pathology FMs trained on open-access TCGA whole slide images.
The experimental evaluation shows that our models reach state-of-the-art performance on various patch-level downstream tasks.
We present an open-source framework designed for the consistent evaluation of pathology FMs across various downstream tasks.
arXiv Detail & Related papers (2024-03-24T21:34:36Z) - Learning from models beyond fine-tuning [78.20895343699658]
Learn From Model (LFM) focuses on the research, modification, and design of foundation models (FM) based on the model interface. The study of LFM techniques can be broadly categorized into five major areas: model tuning, model distillation, model reuse, meta learning and model editing. This paper gives a comprehensive review of the current methods based on FM from the perspective of LFM.
arXiv Detail & Related papers (2023-10-12T10:20:36Z) - DFM-X: Augmentation by Leveraging Prior Knowledge of Shortcut Learning [3.9858496473361402]
We propose a data augmentation strategy, named DFM-X, that leverages knowledge about frequency shortcuts.
We randomly select X% training images of certain classes for augmentation, and process them by retaining the frequencies included in the DFMs of other classes.
Our experimental results demonstrate that DFM-X improves robustness against common corruptions and adversarial attacks.
arXiv Detail & Related papers (2023-08-12T17:39:10Z) - Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z) - Domain Shift in Computer Vision models for MRI data analysis: An Overview [64.69150970967524]
Machine learning and computer vision methods show good performance in medical image analysis.
Yet only a few applications are currently in clinical use.
Poor transferability of the models to data from different sources or acquisition domains is one of the reasons for this.
arXiv Detail & Related papers (2020-10-14T16:34:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.