FHIST: A Benchmark for Few-shot Classification of Histological Images
- URL: http://arxiv.org/abs/2206.00092v1
- Date: Tue, 31 May 2022 20:03:40 GMT
- Title: FHIST: A Benchmark for Few-shot Classification of Histological Images
- Authors: Fereshteh Shakeri, Malik Boudiaf, Sina Mohammadi, Ivaxi Sheth,
Mohammad Havaei, Ismail Ben Ayed, Samira Ebrahimi Kahou
- Abstract summary: Few-shot learning has attracted wide interest in image classification, but almost all the current public benchmarks are focused on natural images.
This paper introduces a highly diversified public benchmark, gathered from various public datasets, for few-shot histology data classification.
We evaluate the performances of state-of-the-art few-shot learning methods on our benchmark, and observe that simple fine-tuning and regularization methods achieve better results than the popular meta-learning and episodic-training paradigm.
- Score: 20.182417148565147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot learning has recently attracted wide interest in image
classification, but almost all the current public benchmarks are focused on
natural images. The few-shot paradigm is highly relevant in medical-imaging
applications due to the scarcity of labeled data, as annotations are expensive
and require specialized expertise. However, in medical imaging, few-shot
learning research is sparse, limited to private datasets, and still at an early
stage. In particular, the few-shot setting is of high interest in histology due
to the diversity and fine granularity of cancer-related tissue classification
tasks, and the variety of data-preparation techniques. This paper introduces a
highly diversified public benchmark, gathered from various public datasets, for
few-shot histology data classification. We build few-shot tasks and
base-training data with various tissue types, different levels of domain shifts
stemming from various cancer sites, and different class-granularity levels,
thereby reflecting realistic scenarios. We evaluate the performances of
state-of-the-art few-shot learning methods on our benchmark, and observe that
simple fine-tuning and regularization methods achieve better results than the
popular meta-learning and episodic-training paradigm. Furthermore, we introduce
three scenarios based on the domain shifts between the source and target
histology data: near-domain, middle-domain and out-domain. Our experiments
display the potential of few-shot learning in histology classification, with
state-of-the-art few-shot learning methods approaching the supervised-learning
baselines in the near-domain setting. In our out-domain setting, for 5-way
5-shot, the best performing method reaches 60% accuracy. We believe that our
work could help in building realistic evaluations and fair comparisons of
few-shot learning methods and will further encourage research in the few-shot
paradigm.
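Note on the evaluation protocol: a 5-way 5-shot task samples 5 classes, 5 labeled support images per class, and a set of unlabeled query images to classify, and accuracy is averaged over many such tasks. The Python sketch below is a hypothetical illustration of this setup and of the kind of simple fine-tuning-plus-regularization baseline highlighted above (a linear head with weight decay trained on frozen backbone features); the class names, feature dimension, and random vectors standing in for a pre-trained backbone are placeholders, not the paper's released code or the FHIST API.
```python
# Minimal sketch of a 5-way 5-shot episode and a simple fine-tuning baseline.
# Hypothetical illustration only: random features stand in for a backbone
# pre-trained on the base (source) classes.
import random
import torch
import torch.nn as nn

N_WAY, K_SHOT, N_QUERY, FEAT_DIM = 5, 5, 15, 512

def sample_episode(features_by_class, n_way=N_WAY, k_shot=K_SHOT, n_query=N_QUERY):
    """Sample support/query features and labels for one few-shot task."""
    classes = random.sample(list(features_by_class), n_way)
    support_x, support_y, query_x, query_y = [], [], [], []
    for label, cls in enumerate(classes):
        feats = features_by_class[cls]
        idx = torch.randperm(len(feats))[: k_shot + n_query]
        support_x.append(feats[idx[:k_shot]])
        support_y += [label] * k_shot
        query_x.append(feats[idx[k_shot:]])
        query_y += [label] * n_query
    return (torch.cat(support_x), torch.tensor(support_y),
            torch.cat(query_x), torch.tensor(query_y))

def finetune_and_eval(support_x, support_y, query_x, query_y, steps=100):
    """Train a linear head on frozen support features, then score the queries."""
    head = nn.Linear(FEAT_DIM, N_WAY)
    # Weight decay acts as the simple L2 regularization used by fine-tuning baselines.
    opt = torch.optim.Adam(head.parameters(), lr=1e-3, weight_decay=1e-4)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(head(support_x), support_y).backward()
        opt.step()
    with torch.no_grad():
        return (head(query_x).argmax(dim=1) == query_y).float().mean().item()

# Placeholder: random vectors stand in for frozen-backbone features of novel classes;
# the tissue names are illustrative only.
features_by_class = {c: torch.randn(100, FEAT_DIM)
                     for c in ["tumor", "stroma", "mucosa", "lymphocytes", "debris"]}
accs = [finetune_and_eval(*sample_episode(features_by_class)) for _ in range(20)]
print(f"mean 5-way 5-shot accuracy over 20 tasks: {sum(accs) / len(accs):.3f}")
```
With 5 ways, chance accuracy is 20%, so the 60% out-domain result quoted above is well above chance yet clearly short of the near-domain setting.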
Related papers
- Cross-Domain Evaluation of Few-Shot Classification Models: Natural Images vs. Histopathological Images [2.364022147677265]
We first train several few-shot classification models on natural images and evaluate their performance on histopathology images.
We incorporated four histopathology datasets and one natural-image dataset and assessed performance across 5-way 1-shot, 5-way 5-shot, and 5-way 10-shot scenarios.
arXiv Detail & Related papers (2024-10-11T18:25:52Z)
- Few-Shot Histopathology Image Classification: Evaluating State-of-the-Art Methods and Unveiling Performance Insights [3.0830445241647313]
We have considered four histopathology datasets for few-shot histopathology image classification.
We have evaluated 5-way 1-shot, 5-way 5-shot and 5-way 10-shot scenarios with a set of state-of-the-art classification techniques.
The best methods surpass accuracies of 70%, 80%, and 85% in the 5-way 1-shot, 5-way 5-shot, and 5-way 10-shot settings, respectively.
arXiv Detail & Related papers (2024-08-25T12:17:05Z)
- MedFMC: A Real-world Dataset and Benchmark For Foundation Model Adaptation in Medical Image Classification [41.16626194300303]
Foundation models, often pre-trained with large-scale data, have achieved paramount success in jump-starting various vision and language applications.
Recent advances further enable adapting foundation models in downstream tasks efficiently using only a few training samples.
Yet, the application of such learning paradigms in medical image analysis remains scarce due to the shortage of publicly accessible data and benchmarks.
arXiv Detail & Related papers (2023-06-16T01:46:07Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
Training an effective deep learning model requires large amounts of data with diverse styles and qualities.
A novel contrastive learning method is developed to equip deep learning models with better style-generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- Significantly improving zero-shot X-ray pathology classification via fine-tuning pre-trained image-text encoders [50.689585476660554]
We propose a new fine-tuning strategy that includes positive-pair loss relaxation and random sentence sampling.
Our approach consistently improves overall zero-shot pathology classification across four chest X-ray datasets and three pre-trained models.
arXiv Detail & Related papers (2022-12-14T06:04:18Z)
- Domain Agnostic Few-Shot Learning For Document Intelligence [4.243926243206826]
Few-shot learning aims to generalize to novel classes with only a few samples with class labels.
In this work, we address the problem of few-shot document image classification under domain shift.
arXiv Detail & Related papers (2021-10-29T03:19:31Z)
- Learn to Ignore: Domain Adaptation for Multi-Site MRI Analysis [1.3079444139643956]
We present a novel method that learns to ignore the scanner-related features present in the images, while learning features relevant for the classification task.
Our method outperforms state-of-the-art domain adaptation methods on a classification task between Multiple Sclerosis patients and healthy subjects.
arXiv Detail & Related papers (2021-10-13T15:40:50Z)
- Meta Navigator: Search for a Good Adaptation Policy for Few-shot Learning [113.05118113697111]
Few-shot learning aims to adapt knowledge learned from previous tasks to novel tasks with only a limited amount of labeled data.
Research literature on few-shot learning exhibits great diversity, while different algorithms often excel at different few-shot learning scenarios.
We present Meta Navigator, a framework that attempts to solve the limitation in few-shot learning by seeking a higher-level strategy.
arXiv Detail & Related papers (2021-09-13T07:20:01Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Dataset Bias in Few-shot Image Recognition [57.25445414402398]
We first investigate the impact of transferable capabilities learned from base categories.
Second, we investigate performance differences on different datasets from dataset structures and different few-shot learning methods.
arXiv Detail & Related papers (2020-08-18T14:46:23Z)
- Extending and Analyzing Self-Supervised Learning Across Domains [50.13326427158233]
Self-supervised representation learning has achieved impressive results in recent years.
Experiments are primarily conducted on ImageNet or other similarly large internet imagery datasets.
We experiment with several popular methods on an unprecedented variety of domains.
arXiv Detail & Related papers (2020-04-24T21:18:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.