Multi-task pre-training of deep neural networks for digital pathology
- URL: http://arxiv.org/abs/2005.02561v2
- Date: Thu, 7 May 2020 08:16:31 GMT
- Title: Multi-task pre-training of deep neural networks for digital pathology
- Authors: Romain Mormont, Pierre Geurts, Raphaël Marée
- Abstract summary: We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images.
We show that our models used as feature extractors either improve significantly over ImageNet pre-trained models or provide comparable performance.
- Score: 8.74883469030132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we investigate multi-task learning as a way of pre-training
models for classification tasks in digital pathology. It is motivated by the
fact that many small and medium-size datasets have been released by the
community over the years whereas there is no large scale dataset similar to
ImageNet in the domain. We first assemble and transform many digital pathology
datasets into a pool of 22 classification tasks and almost 900k images. Then,
we propose a simple architecture and training scheme for creating a
transferable model and a robust evaluation and selection protocol in order to
evaluate our method. Depending on the target task, we show that our models used
as feature extractors either improve significantly over ImageNet pre-trained
models or provide comparable performance. Fine-tuning improves performance over
feature extraction and is able to recover the lack of specificity of ImageNet
features, as both pre-training sources yield comparable performance.
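The abstract describes a single shared model pre-trained jointly on a pool of pathology classification tasks and then transferred to target tasks. As a rough illustration of that kind of setup, here is a minimal PyTorch sketch with a shared backbone and one classification head per task; the backbone choice, head structure, round-robin task sampling, and hyperparameters are illustrative assumptions, not the authors' exact architecture or training scheme.

```python
# Minimal multi-task pre-training sketch (illustrative, not the paper's exact setup).
# Assumes task_loaders maps task names to DataLoaders and task_classes maps them to class counts.
import torch
import torch.nn as nn
from torchvision import models  # requires torchvision >= 0.13 for the weights API


class MultiTaskNet(nn.Module):
    def __init__(self, task_classes: dict[str, int]):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # shared feature extractor
        self.backbone = backbone
        # one lightweight classification head per pre-training task
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, n) for name, n in task_classes.items()}
        )

    def forward(self, x, task: str):
        return self.heads[task](self.backbone(x))


def pretrain(model, task_loaders, steps=10_000, lr=1e-3, device="cuda"):
    """Alternate over tasks, one batch at a time, with a single shared optimizer."""
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    iters = {t: iter(dl) for t, dl in task_loaders.items()}
    tasks = list(task_loaders)
    for step in range(steps):
        task = tasks[step % len(tasks)]      # simple round-robin task sampling
        try:
            images, labels = next(iters[task])
        except StopIteration:
            iters[task] = iter(task_loaders[task])
            images, labels = next(iters[task])
        logits = model(images.to(device), task)
        loss = loss_fn(logits, labels.to(device))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

After pre-training, the task heads can be discarded and the shared backbone transferred to a target task, either frozen (feature extraction) or fine-tuned end to end, which is the comparison reported in the abstract.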
Related papers
- Self-Supervised Learning in Deep Networks: A Pathway to Robust Few-Shot Classification [0.0]
We first pre-train the model with self-supervision so that it learns general feature representations from a large amount of unlabeled data.
We then fine-tune it on the few-shot dataset Mini-ImageNet to improve its accuracy and generalization ability under limited data.
arXiv Detail & Related papers (2024-11-19T01:01:56Z) - Enhancing pretraining efficiency for medical image segmentation via transferability metrics [0.0]
In medical image segmentation tasks, the scarcity of labeled training data poses a significant challenge.
We introduce a novel transferability metric, based on contrastive learning, that measures how robustly a pretrained model is able to represent the target data.
arXiv Detail & Related papers (2024-10-24T12:11:52Z) - Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining [73.81862342673894]
Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks.
However, transferring pretrained models to downstream tasks may suffer from a task discrepancy, because pretraining is typically formulated as an image classification or object discrimination task.
We conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection.
Our models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection.
arXiv Detail & Related papers (2024-03-20T09:17:22Z) - An evaluation of pre-trained models for feature extraction in image
classification [0.0]
This work aims to compare the performance of different pre-trained neural networks for feature extraction in image classification tasks.
Our results demonstrate that the best overall performance across the datasets was achieved by CLIP-ViT-B and ViT-H-14, while CLIP-ResNet50 had similar performance with less variability.
arXiv Detail & Related papers (2023-10-03T13:28:14Z) - DreamTeacher: Pretraining Image Backbones with Deep Generative Models [103.62397699392346]
We introduce a self-supervised feature representation learning framework that utilizes generative networks for pre-training downstream image backbones.
We investigate knowledge distillation of learned generative features onto target image backbones as an alternative to pretraining these backbones on large labeled datasets such as ImageNet.
We empirically find that our DreamTeacher significantly outperforms existing self-supervised representation learning approaches across the board.
arXiv Detail & Related papers (2023-07-14T17:17:17Z) - Multi-domain learning CNN model for microscopy image classification [3.2835754110596236]
We present a multi-domain learning architecture for the classification of microscopy images.
Unlike previous methods that are computationally intensive, we have developed a compact model, called Mobincep.
It surpasses state-of-the-art results and remains robust when labeled data is limited.
arXiv Detail & Related papers (2023-04-20T19:32:23Z) - Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z) - Image Augmentation for Multitask Few-Shot Learning: Agricultural Domain
Use-Case [0.0]
This paper addresses the challenge of small and imbalanced datasets, using the plant phenomics domain as an example.
We introduce an image augmentation framework, which enables us to greatly enlarge the number of training samples.
We prove that our augmentation method increases model performance when only a few training samples are available.
arXiv Detail & Related papers (2021-02-24T14:08:34Z) - Pre-Trained Image Processing Transformer [95.93031793337613]
We develop a new pre-trained model, namely the image processing transformer (IPT).
We propose to utilize the well-known ImageNet benchmark to generate a large number of corrupted image pairs.
The IPT model is trained on these images with multiple heads and tails.
arXiv Detail & Related papers (2020-12-01T09:42:46Z) - Adversarially-Trained Deep Nets Transfer Better: Illustration on Image
Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
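The main abstract and several of the related papers above (for example, the feature-extraction evaluation and the adversarial-transfer study) compare two transfer modes: using a pre-trained backbone as a frozen feature extractor versus fine-tuning it end to end. The sketch below contrasts the two modes; the backbone, head, and optimizer settings are illustrative assumptions and are not taken from any of the listed papers.

```python
# Contrast of the two transfer modes discussed above (illustrative assumptions only).
import torch.nn as nn
from torchvision import models  # requires torchvision >= 0.13 for the weights API


def build_transfer_model(num_classes: int, mode: str = "feature_extraction"):
    """mode: 'feature_extraction' freezes the backbone, 'fine_tuning' updates everything."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    if mode == "feature_extraction":
        for p in model.parameters():
            p.requires_grad = False          # backbone stays fixed; only the head is trained
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head is always trainable
    return model


# Typical usage: pass only the trainable parameters to the optimizer, e.g.
#   params = [p for p in model.parameters() if p.requires_grad]
#   optimizer = torch.optim.SGD(params, lr=1e-3)
```

In the main paper, fine-tuning is reported to improve over frozen feature extraction and to recover the lack of specificity of ImageNet features, so the choice of pre-training source matters most when the backbone stays frozen.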