HistoKT: Cross Knowledge Transfer in Computational Pathology
- URL: http://arxiv.org/abs/2201.11246v1
- Date: Thu, 27 Jan 2022 00:34:19 GMT
- Title: HistoKT: Cross Knowledge Transfer in Computational Pathology
- Authors: Ryan Zhang and Jiadai Zhu and Stephen Yang and Mahdi S. Hosseini and
Angelo Genovese and Lina Chen and Corwyn Rowsell and Savvas Damaskinos and
Sonal Varma and Konstantinos N. Plataniotis
- Abstract summary: The lack of well-annotated datasets in computational pathology (CPath) obstructs the application of deep learning techniques for classifying medical images.
Most transfer learning research follows a model-centric approach, tuning network parameters to improve transfer results over a few datasets.
- Score: 31.14107299224401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The lack of well-annotated datasets in computational pathology (CPath)
obstructs the application of deep learning techniques for classifying medical
images. Since pathologist time is expensive, dataset curation is intrinsically
difficult. Many CPath workflows involve transferring learned knowledge between
various image domains through transfer learning. Currently, most transfer
learning research follows a model-centric approach, tuning network parameters
to improve transfer results over a few datasets. In this paper, we take a
data-centric approach to the transfer learning problem and examine the
existence of generalizable knowledge between histopathological datasets. First,
we create a standardization workflow for aggregating existing histopathological
data. We then measure inter-domain knowledge by training ResNet18 models across
multiple histopathological datasets, and cross-transferring between them to
determine the quantity and quality of innate shared knowledge. Additionally, we
use weight distillation to share knowledge between models without additional
training. We find that hard-to-learn, multi-class datasets benefit most from
pretraining, and that a two-stage learning framework incorporating a large source
domain such as ImageNet allows for better utilization of smaller datasets.
Furthermore, we find that weight distillation enables models trained on purely
histopathological features to outperform models using external natural image
data.
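The two-stage framework above can be made concrete with a short PyTorch sketch: an ImageNet-initialized ResNet18 is first pretrained on a large intermediate histopathology dataset, then fine-tuned on the small target dataset. This is one plausible reading of the abstract; the loaders, class counts, and hyperparameters below are placeholder assumptions, not the paper's settings.

```python
# Minimal sketch of two-stage transfer: ImageNet weights -> intermediate
# histopathology pretraining -> fine-tuning on a small target dataset.
# Loaders, class counts, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

def fit(model, loader, epochs, lr, device="cuda"):
    """Plain supervised training loop shared by both stages."""
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            opt.step()

def two_stage_transfer(source_loader, target_loader, n_source, n_target):
    # Stage 1: start from a large natural-image source domain (ImageNet),
    # then pretrain on a large intermediate histopathology dataset.
    model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, n_source)
    fit(model, source_loader, epochs=50, lr=0.01)
    # Stage 2: swap the classification head, fine-tune on the target dataset.
    model.fc = nn.Linear(model.fc.in_features, n_target)
    fit(model, target_loader, epochs=20, lr=0.001)
    return model
```

Swapping only the final layer while retaining the backbone is the standard way to reuse learned representations across label spaces.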
Related papers
- Transfer Learning between Motor Imagery Datasets using Deep Learning --
Validation of Framework and Comparison of Datasets [0.0]
We present a simple deep-learning-based framework of the kind commonly used in computer vision.
We demonstrate its effectiveness for cross-dataset transfer learning in mental imagery decoding tasks.
arXiv Detail & Related papers (2023-09-04T20:58:57Z)
- Pick the Best Pre-trained Model: Towards Transferability Estimation for Medical Image Segmentation [20.03177073703528]
Transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task.
We propose a new Transferability Estimation (TE) method for medical image segmentation.
Our method surpasses all current algorithms for transferability estimation in medical image segmentation.
arXiv Detail & Related papers (2023-07-22T01:58:18Z)
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- Terrain Classification using Transfer Learning on Hyperspectral Images: A Comparative study [0.13999481573773068]
The convolutional neural network (CNN) and the multi-layer perceptron (MLP) have proven to be effective methods for image classification.
However, they suffer from long training times and the requirement of large amounts of labeled data.
We propose using transfer learning to decrease the training time and reduce the dependence on large labeled datasets.
arXiv Detail & Related papers (2022-06-19T14:36:33Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps when training neural networks can improve model regularization and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
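The summary does not specify how the attribution maps enter training, so the following is only a generic sketch of one common variant, input-gradient regularization, where the saliency map is penalized alongside the task loss; the paper's specific scheme may differ.

```python
# Generic sketch (not necessarily the paper's scheme): treat the gradient of
# the task loss w.r.t. the input as an attribution map and penalize its
# magnitude, which regularizes the model's input sensitivity.
import torch
import torch.nn.functional as F

def attribution_regularized_loss(model, x, y, lam=0.01):
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)
    # Attribution map: d(loss)/d(input); create_graph keeps it differentiable.
    attribution, = torch.autograd.grad(task_loss, x, create_graph=True)
    return task_loss + lam * attribution.abs().mean()
```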
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task- and class-incremental learning of diseases addresses the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
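As a rough illustration of incremental training without starting from scratch, here is a sketch of a simple replay baseline; the buffer policy and sizes are assumptions, not part of the LifeLonger benchmark.

```python
# Sketch of replay-based incremental learning: a bounded buffer of past
# examples is mixed into each new task's batches. Buffer policy is illustrative.
import random
import torch
import torch.nn.functional as F

buffer, BUFFER_SIZE = [], 2000  # (x, y) pairs remembered from earlier tasks

def train_task(model, opt, loader, device="cuda"):
    model.train()
    for x, y in loader:
        for pair in zip(x, y):                 # remember some current examples
            if len(buffer) < BUFFER_SIZE:
                buffer.append(pair)
        if buffer:                             # replay older examples
            bx, by = zip(*random.sample(buffer, min(len(buffer), x.size(0))))
            x = torch.cat([x, torch.stack(bx)])
            y = torch.cat([y, torch.stack(by)])
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
```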
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- How Well Do Sparse ImageNet Models Transfer? [75.98123173154605]
Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" datasets.
In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset.
We show that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities.
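A minimal sketch of what transferring a sparse model can look like in PyTorch, using magnitude pruning as a stand-in for the paper's sparsification; the sparsity level and pruning recipe are assumptions.

```python
# Sketch: prune an ImageNet backbone, then fine-tune downstream with the mask
# fixed. torch.nn.utils.prune reparameterizes weight = weight_orig * weight_mask,
# so pruned entries stay zero while the surviving weights keep training.
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.9)  # 90% sparsity

model.fc = nn.Linear(model.fc.in_features, 10)  # placeholder downstream head
# ...then run a standard fine-tuning loop; only weight_orig receives gradients.
```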
arXiv Detail & Related papers (2021-11-26T11:58:51Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Self supervised contrastive learning for digital histopathology [0.0]
We use a contrastive self-supervised learning method called SimCLR that achieved state-of-the-art results on natural-scene images.
We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features.
Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet pretrained networks.
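For reference, a compact sketch of the SimCLR NT-Xent loss the entry refers to: two augmented views per image are pulled together and contrasted against the rest of the batch (the temperature value is illustrative).

```python
# SimCLR-style NT-Xent loss: positives are the two views of the same image,
# negatives are every other sample in the batch.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, d) projection-head outputs for two views of N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d) unit vectors
    sim = z @ z.t() / temperature                       # scaled cosine similarity
    sim.fill_diagonal_(float("-inf"))                   # drop self-similarity
    n = z1.size(0)
    # Row i's positive is its sibling view: i+n in the first half, i-n after.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```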
arXiv Detail & Related papers (2020-11-27T19:18:45Z)
- Siloed Federated Learning for Multi-Centric Histopathology Datasets [0.17842332554022694]
This paper proposes a novel federated learning approach for deep learning architectures in the medical domain.
Local-statistic batch normalization (BN) layers are introduced, resulting in collaboratively-trained, yet center-specific models.
We benchmark the proposed method on the classification of tumorous histopathology image patches extracted from the Camelyon16 and Camelyon17 datasets.
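A minimal sketch of the local-statistic BN idea, assuming a FedAvg-style aggregation: every parameter is averaged across centers except the batch-normalization entries, which each center keeps local. The key-matching heuristic below is an assumption about layer naming.

```python
# Sketch: federated averaging that skips BN parameters and statistics, leaving
# them center-specific. BN keys are detected by name, a naming-convention
# assumption (true for torchvision ResNets).
import torch

BN_MARKERS = ("bn", "running_mean", "running_var", "num_batches_tracked")

def fedavg_except_bn(state_dicts):
    """Average every tensor whose key is not BN-related."""
    keys = [k for k in state_dicts[0] if not any(m in k for m in BN_MARKERS)]
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(0)
            for k in keys}

# Each center merges the shared weights while keeping its own BN layers:
# model.load_state_dict(fedavg_except_bn(center_state_dicts), strict=False)
```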
arXiv Detail & Related papers (2020-08-17T15:49:30Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
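As background for the claim, a short sketch of PGD adversarial training for the upstream model; the robust backbone would then be transferred by replacing its head and fine-tuning as usual. Attack hyperparameters are illustrative.

```python
# Sketch of L-infinity PGD adversarial training: maximize the loss inside an
# epsilon-ball around each input, then train on the perturbed examples.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()              # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)         # project to the ball
    return x_adv.detach().clamp(0, 1)

# Upstream training step:
# loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)
```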
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.