Is Self-Supervision Enough? Benchmarking Foundation Models Against End-to-End Training for Mitotic Figure Classification
- URL: http://arxiv.org/abs/2412.06365v2
- Date: Mon, 16 Dec 2024 09:25:31 GMT
- Title: Is Self-Supervision Enough? Benchmarking Foundation Models Against End-to-End Training for Mitotic Figure Classification
- Authors: Jonathan Ganz, Jonas Ammeling, Emely Rosbach, Ludwig Lausser, Christof A. Bertram, Katharina Breininger, Marc Aubreville
- Abstract summary: Foundation models (FMs) have recently become popular and widely available for the domain of histopathology.
In this work, we investigate to what degree this also holds for mitotic figure classification.
We found that the end-to-end-trained baseline outperformed all FM-based classifiers, regardless of the amount of data provided.
- Score: 0.37334049820361814
- Abstract: Foundation models (FMs), i.e., models trained on a vast amount of typically unlabeled data, have recently become popular and available for the domain of histopathology. The key idea is to extract semantically rich vectors from any input patch, allowing the use of simple subsequent classification networks, potentially reducing the required amount of labeled data and increasing domain robustness. In this work, we investigate to what degree this also holds for mitotic figure classification. Utilizing two popular public mitotic figure datasets, we compared linear probing of five publicly available FMs against models trained on ImageNet and a simple ResNet50 end-to-end-trained baseline. We found that the end-to-end-trained baseline outperformed all FM-based classifiers, regardless of the amount of data provided. Additionally, we did not observe the FM-based classifiers to be more robust against domain shifts, rendering both of the above assumptions incorrect.
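To make the two regimes concrete, below is a minimal PyTorch sketch of linear probing a frozen feature extractor versus end-to-end training. An ImageNet-pretrained ResNet50 from torchvision stands in for a histopathology FM, and the binary mitotic/non-mitotic setup, optimizers, and learning rates are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

NUM_CLASSES = 2  # assumed binary setup: mitotic figure vs. look-alike

# --- Regime 1: linear probing of a frozen feature extractor ---
# An ImageNet-pretrained ResNet50 stands in for a pathology FM here.
backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()            # expose the 2048-d embedding
for p in backbone.parameters():
    p.requires_grad = False            # features stay frozen
backbone.eval()

probe = nn.Linear(2048, NUM_CLASSES)   # the only trainable part
probe_optim = torch.optim.AdamW(probe.parameters(), lr=1e-3)

# --- Regime 2: end-to-end training of the same architecture ---
e2e = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
e2e.fc = nn.Linear(e2e.fc.in_features, NUM_CLASSES)
e2e_optim = torch.optim.AdamW(e2e.parameters(), lr=1e-4)

criterion = nn.CrossEntropyLoss()

def probe_step(x, y):
    with torch.no_grad():              # no gradients through the backbone
        feats = backbone(x)
    loss = criterion(probe(feats), y)
    probe_optim.zero_grad(); loss.backward(); probe_optim.step()
    return loss.item()

def e2e_step(x, y):
    loss = criterion(e2e(x), y)        # every weight is updated
    e2e_optim.zero_grad(); loss.backward(); e2e_optim.step()
    return loss.item()
```

The probe can only reweight the frozen embeddings, so its quality is capped by how well those features already separate the classes; the end-to-end model can reshape every feature, which is consistent with the abstract's finding that it wins at every label budget.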
Related papers
- Foundation Model or Finetune? Evaluation of few-shot semantic segmentation for river pollution [16.272314073324626]
Foundation models (FMs) are a popular topic of research in AI.
In this work, we compare the performance of FMs to finetuned pre-trained supervised models in the task of semantic segmentation.
We see that finetuned models consistently outperform the FMs tested, even in cases where data is scarce.
arXiv Detail & Related papers (2024-09-05T17:59:32Z)
- A separability-based approach to quantifying generalization: which layer is best? [0.0]
Generalization to unseen data remains poorly understood for deep learning classification and foundation models.
We provide a new method for evaluating the capacity of networks to represent a sampled domain.
We find that (i) high classification accuracy does not imply high generalizability; and (ii) deeper layers in a model do not always generalize the best.
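The summary does not state the paper's separability measure, so the following is only a generic illustration of per-layer probing: pooled activations are captured at several ResNet depths with forward hooks, and each layer is scored with a closed-form ridge probe. The model choice and the probe itself are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

# Capture spatially pooled activations at several depths via forward hooks.
feats = {}
def make_hook(name):
    def hook(_module, _inputs, output):
        pooled = nn.functional.adaptive_avg_pool2d(output, 1)
        feats[name] = torch.flatten(pooled, 1)   # (B, C) per layer
    return hook

for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(make_hook(name))

@torch.no_grad()
def layer_features(x):
    model(x)                                     # hooks fill `feats`
    return {k: v.clone() for k, v in feats.items()}

def probe_accuracy(X, y, num_classes, lam=1e-3):
    """Separability proxy: accuracy of a closed-form ridge probe."""
    Y = nn.functional.one_hot(y, num_classes).float()
    XtX = X.T @ X + lam * torch.eye(X.shape[1])
    W = torch.linalg.solve(XtX, X.T @ Y)         # ridge weights (D, K)
    return ((X @ W).argmax(dim=1) == y).float().mean().item()
```

Comparing probe accuracy across layers on held-out domains is one way to see finding (ii): the deepest layer is not automatically the most generalizable.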
arXiv Detail & Related papers (2024-05-02T17:54:35Z)
- Generalized Logit Adjustment: Calibrating Fine-tuned Models by Removing Label Bias in Foundation Models [75.9543301303586]
Foundation models like CLIP allow zero-shot transfer on various tasks without additional training data.
Fine-tuning and ensembling are also commonly adopted to better fit the downstream tasks.
However, we argue that prior work has overlooked the inherent biases in foundation models.
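The summary leaves the paper's bias estimator unspecified; the underlying idea, though, belongs to the family of logit adjustment, where the log of an estimated class prior is subtracted from the logits. A hedged sketch, with the batch-averaged softmax as an assumed (crude) prior estimate:

```python
import torch

def estimate_class_prior(logits):
    """Crude bias estimate: average softmax over an unlabeled batch.

    logits: (N, K) zero-shot logits, e.g. CLIP image-text similarities.
    """
    return logits.softmax(dim=1).mean(dim=0)        # (K,)

def adjusted_predictions(logits, prior, tau=1.0):
    # Classic logit adjustment: subtract tau * log(prior) before argmax.
    return (logits - tau * prior.clamp_min(1e-8).log()).argmax(dim=1)
```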
arXiv Detail & Related papers (2023-10-12T08:01:11Z)
- RAHNet: Retrieval Augmented Hybrid Network for Long-tailed Graph Classification [10.806893809269074]
We propose a novel framework called Retrieval Augmented Hybrid Network (RAHNet) to jointly learn a robust feature extractor and an unbiased classifier.
In the feature extractor training stage, we develop a graph retrieval module to search for relevant graphs that directly enrich the intra-class diversity for the tail classes.
We also optimize a novel category-centered supervised contrastive loss to obtain discriminative representations.
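The exact category-centered formulation is not given here; as a reference point, a standard supervised contrastive loss over L2-normalized embeddings (a baseline sketch, not RAHNet's loss) looks as follows:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Standard SupCon loss over a batch; z: (N, D), labels: (N,)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T / temperature                      # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)           # exclude i == i pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # Mean log-probability of each anchor's positives (same-class pairs).
    loss = -(log_prob * pos).sum(dim=1) / pos.sum(dim=1).clamp_min(1)
    return loss.mean()
```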
arXiv Detail & Related papers (2023-08-04T14:06:44Z)
- Distilling BlackBox to Interpretable models for Efficient Transfer Learning [19.40897632956169]
Building generalizable AI models is one of the primary challenges in the healthcare domain.
Fine-tuning a model to transfer knowledge from one domain to another requires a significant amount of labeled data in the target domain.
We develop an interpretable model that can be efficiently fine-tuned to an unseen target domain with minimal computational cost.
arXiv Detail & Related papers (2023-05-26T23:23:48Z)
- GMM-IL: Image Classification using Incrementally Learnt, Independent Probabilistic Models for Small Sample Sizes [0.4511923587827301]
We present a novel two-stage architecture which couples visual feature learning with probabilistic models to represent each class.
We outperform a benchmark of an equivalent network with a Softmax head, obtaining increased accuracy for sample sizes smaller than 12 and increased weighted F1 score for 3 imbalanced class profiles.
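A minimal version of the probabilistic stage, assuming feature vectors from any fixed extractor: fit one Gaussian mixture per class and classify by maximum log-likelihood. The class below is hypothetical, with scikit-learn used purely for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMPerClassClassifier:
    """One independent GMM per class; predict by maximum log-likelihood."""

    def __init__(self, n_components=2):
        self.n_components = n_components
        self.gmms = {}

    def add_class(self, label, features):
        # features: (n, d) embeddings for this class only. Other classes'
        # models are untouched, which is what makes learning incremental.
        gmm = GaussianMixture(n_components=self.n_components,
                              covariance_type="diag")
        self.gmms[label] = gmm.fit(features)

    def predict(self, features):
        labels = sorted(self.gmms)
        scores = np.stack([self.gmms[c].score_samples(features)
                           for c in labels], axis=1)
        return np.asarray(labels)[scores.argmax(axis=1)]
```

Because each class owns its own density model, adding a class never perturbs the others, unlike a shared Softmax head.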
arXiv Detail & Related papers (2022-12-01T15:19:42Z)
- Prediction Calibration for Generalized Few-shot Semantic Segmentation [101.69940565204816]
Generalized Few-shot Semantic Segmentation (GFSS) aims to segment each image pixel into either base classes with abundant training examples or novel classes with only a handful (e.g., 1-5) of training images per class.
We build a cross-attention module that guides the classifier's final prediction using the fused multi-level features.
Our Prediction Calibration Network (PCN) outperforms the state-of-the-art alternatives by large margins.
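The summary gives only the outline of the mechanism. A generic cross-attention block in which per-pixel queries attend over fused multi-level feature tokens (an illustrative stand-in, not the paper's actual PCN module) could look like this:

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Hypothetical sketch: per-pixel queries attend over multi-level
    feature tokens, and the result refines the pixel features residually."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, pixel_feats, level_feats):
        # pixel_feats: (B, H*W, D) queries; level_feats: (B, T, D) fused
        # multi-level tokens serving as keys and values.
        fused, _ = self.attn(pixel_feats, level_feats, level_feats)
        return pixel_feats + fused
```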
arXiv Detail & Related papers (2022-10-15T13:30:12Z)
- Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and classify the segmented object.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
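A hedged sketch of that wiring: a segmentation network predicts part maps, and a tiny head classifies from them, with both trained jointly. Pooling the part maps into a presence vector is a simplification of the paper's design, and deeplabv3_resnet50 is only a stand-in segmentation model.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

class PartBasedClassifier(nn.Module):
    """Sketch: part segmentation feeding a tiny classifier, trained jointly.

    Pooling part maps into a presence vector is a simplification of the
    paper's design; DeepLabv3 is only a stand-in segmentation model.
    """

    def __init__(self, num_parts, num_classes):
        super().__init__()
        self.seg = deeplabv3_resnet50(weights=None, num_classes=num_parts)
        self.head = nn.Sequential(nn.Linear(num_parts, 64), nn.ReLU(),
                                  nn.Linear(64, num_classes))

    def forward(self, x):
        masks = self.seg(x)["out"].softmax(dim=1)   # (B, P, H, W) part maps
        pooled = masks.mean(dim=(2, 3))             # per-part presence
        return self.head(pooled), masks             # class logits + parts
```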
arXiv Detail & Related papers (2022-09-15T15:41:47Z)
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as a simplex equiangular tight frame (ETF) and kept fixed during training.
Our experimental results show that our method achieves similar performance on image classification for balanced datasets.
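A simplex ETF can be constructed in closed form, K unit-norm class vectors with pairwise inner product -1/(K-1), and registered as a frozen buffer so that only the backbone learns. A minimal sketch:

```python
import math
import torch
import torch.nn as nn

def simplex_etf(num_classes: int, dim: int) -> torch.Tensor:
    """K x D simplex ETF: unit-norm rows with pairwise inner product
    -1/(K-1), randomly rotated into the feature space."""
    K = num_classes
    assert dim >= K - 1
    U, _ = torch.linalg.qr(torch.randn(dim, K))     # orthonormal columns
    M = math.sqrt(K / (K - 1)) * (torch.eye(K) - torch.ones(K, K) / K)
    return (U @ M).T                                # (K, D) class vectors

class FixedETFHead(nn.Module):
    def __init__(self, num_classes, dim):
        super().__init__()
        # A buffer, not a Parameter: the classifier never receives updates.
        self.register_buffer("weight", simplex_etf(num_classes, dim))

    def forward(self, feats):
        return feats @ self.weight.T                # logits, (B, K)
```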
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distributions.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
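A normalized (cosine) classifier in the sense of the second modification rescales both features and class weights to unit norm, so that large weight norms on head classes cannot dominate tail classes; the scale below is an assumed hyperparameter:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedClassifier(nn.Module):
    """Cosine classifier: features and class weights are L2-normalized,
    so weight norms of head classes cannot dominate tail classes."""

    def __init__(self, dim, num_classes, scale=16.0):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(num_classes, dim))
        self.scale = scale  # temperature; 16.0 is an assumed value

    def forward(self, feats):
        return self.scale * F.linear(F.normalize(feats, dim=1),
                                     F.normalize(self.weight, dim=1))
```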
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
- Class Balancing GAN with a Classifier in the Loop [58.29090045399214]
We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer in learning representations for long-tailed distributions, achieving better performance than existing approaches on multiple datasets.
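The paper derives its regularizer theoretically; as an illustrative stand-in only, one can penalize the KL divergence between the pre-trained classifier's mean prediction over a generated batch and the uniform distribution:

```python
import math
import torch

def class_balance_regularizer(fake_images, classifier):
    """Illustrative stand-in: KL(mean predicted class mix || uniform).

    `classifier` is the frozen pre-trained classifier; gradients still
    flow through it into the generator that produced `fake_images`.
    """
    probs = classifier(fake_images).softmax(dim=1)   # (B, K)
    mean_probs = probs.mean(dim=0).clamp_min(1e-8)   # batch class mix
    k = mean_probs.numel()
    # KL(p || uniform) = log K + sum_i p_i log p_i; zero iff balanced.
    return math.log(k) + (mean_probs * mean_probs.log()).sum()
```

Added to the generator loss, this term is minimized only when the generated batch covers all classes evenly, which is the balancing behavior the summary describes.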
arXiv Detail & Related papers (2021-06-17T11:41:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.