How Much Off-The-Shelf Knowledge Is Transferable From Natural Images To
Pathology Images?
- URL: http://arxiv.org/abs/2005.01609v3
- Date: Sat, 9 May 2020 01:44:42 GMT
- Title: How Much Off-The-Shelf Knowledge Is Transferable From Natural Images To
Pathology Images?
- Authors: Xingyu Li, Konstantinos N. Plataniotis
- Abstract summary: Recent studies exploit transfer learning to reuse knowledge gained from natural images in pathology image analysis.
This paper proposes a framework to quantify the knowledge gained by a particular layer and conducts an empirical investigation of pathology-image-centered transfer learning.
The general representations generated by early layers do convey transferred knowledge across various image classification applications.
- Score: 36.009216029815555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has achieved great success in natural image classification. To overcome data scarcity in computational pathology, recent studies exploit transfer learning to reuse knowledge gained from natural images in pathology image analysis, aiming to build effective pathology image diagnosis models. Since the transferability of knowledge heavily depends on the similarity of the original and target tasks, the significant differences in image content and statistics between pathology images and natural images raise two questions: how much knowledge is transferable, and do all pre-trained layers contribute equally to the transferred information? To answer these questions, this paper proposes a framework to quantify the knowledge gained by a particular layer, conducts an empirical investigation of pathology-image-centered transfer learning, and reports some interesting observations. In particular, compared to the performance baseline obtained with a random-weight model, the transferability of off-the-shelf representations from deep layers depends heavily on the specific pathology image set, whereas the general representations generated by early layers do convey transferred knowledge across various image classification applications. The observations in this study encourage further investigation of dedicated metrics and tools to quantify the effectiveness and feasibility of transfer learning.
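For concreteness, the kind of layer-wise probing the abstract describes can be sketched as follows: take off-the-shelf features from an ImageNet-pretrained network truncated at a chosen layer, repeat with a randomly initialized copy of the same network, and train a simple classifier on each feature set so that the accuracy gap indicates the knowledge contributed by the transferred weights. This is a minimal sketch assuming a PyTorch/torchvision setup; the ResNet-18 backbone, the cut point `layer1`, and the linear-probe choice are illustrative assumptions rather than the authors' exact protocol.

```python
import torch
from torch import nn
import torchvision.models as models

def feature_extractor(pretrained: bool, cut_after: str = "layer1") -> nn.Module:
    """ResNet-18 truncated after `cut_after`; pretrained=False gives the random-weight baseline."""
    weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
    net = models.resnet18(weights=weights)
    blocks = []
    for name, module in net.named_children():
        blocks.append(module)
        if name == cut_after:  # keep everything up to and including the chosen layer
            break
    # Pool the spatial map into a fixed-length off-the-shelf feature vector.
    return nn.Sequential(*blocks, nn.AdaptiveAvgPool2d(1), nn.Flatten()).eval()

@torch.no_grad()
def extract_features(model: nn.Module, loader):
    feats, labels = [], []
    for x, y in loader:  # loader yields (image batch, label batch) of pathology patches
        feats.append(model(x))
        labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# Knowledge gain of a layer ~ probe_accuracy(pretrained features) - probe_accuracy(random features),
# where the probe is any simple classifier (e.g., scikit-learn's LogisticRegression) fit on each set.
```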
Related papers
- Knowledge-enhanced Visual-Language Pretraining for Computational Pathology [68.6831438330526]
We consider the problem of visual representation learning for computational pathology by exploiting large-scale image-text pairs gathered from public resources.
We curate a pathology knowledge tree that consists of 50,470 informative attributes for 4,718 diseases requiring pathology diagnosis from 32 human tissues.
arXiv Detail & Related papers (2024-04-15T17:11:25Z)
- Adversarial-Robust Transfer Learning for Medical Imaging via Domain Assimilation [17.46080957271494]
The scarcity of publicly available medical images has led contemporary algorithms to depend on pretrained models grounded on a large set of natural images.
A significant domain discrepancy exists between natural and medical images, which causes AI models to exhibit heightened vulnerability to adversarial attacks.
This paper proposes a domain assimilation approach that introduces texture and color adaptation into transfer learning, followed by a texture preservation component to suppress undesired distortion.
arXiv Detail & Related papers (2024-02-25T06:39:15Z)
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- Causality-Driven One-Shot Learning for Prostate Cancer Grading from MRI [1.049712834719005]
We present a novel method to automatically classify medical images that learns and leverages weak causal signals in the image.
Our framework consists of a convolutional neural network backbone and a causality-extractor module.
Our findings show that causal relationships among features play a crucial role in enhancing the model's ability to discern relevant information.
arXiv Detail & Related papers (2023-09-19T16:08:33Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- What Makes Transfer Learning Work For Medical Images: Feature Reuse & Other Factors [1.5207770161985628]
It is unclear what factors determine whether - and to what extent - transfer learning to the medical domain is useful.
We explore the relationship between transfer learning, data size, the capacity and inductive bias of the model, as well as the distance between the source and target domain.
arXiv Detail & Related papers (2022-03-02T10:13:11Z)
- HistoKT: Cross Knowledge Transfer in Computational Pathology [31.14107299224401]
The lack of well-annotated datasets in computational pathology (CPath) obstructs the application of deep learning techniques for classifying medical images.
Most transfer learning research follows a model-centric approach, tuning network parameters to improve transfer results over few datasets.
arXiv Detail & Related papers (2022-01-27T00:34:19Z)
- Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types [50.1843146606122]
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to aid the cross-attention process and to overcome the imbalance between classes and easy-dominated samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
- Supervision and Source Domain Impact on Representation Learning: A Histopathology Case Study [6.762603053858596]
In this work, we explored the performance of a deep neural network and triplet loss in the area of representation learning.
We investigated the notion of similarity and dissimilarity in pathology whole-slide images and compared different setups from unsupervised and semi-supervised to supervised learning.
We achieved high accuracy and generalization when the learned representations were applied to two different pathology datasets (a minimal triplet-loss sketch follows after this list).
arXiv Detail & Related papers (2020-05-10T21:27:38Z)
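Below is a minimal sketch of the triplet-loss representation learning described in the last entry, assuming a PyTorch setup; the ResNet-18 patch embedder, the 128-dimensional normalized embedding, and the margin value are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from torch import nn
import torchvision.models as models

class PatchEmbedder(nn.Module):
    """Maps a pathology patch to an L2-normalized embedding; the backbone choice is illustrative."""
    def __init__(self, dim: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.backbone = backbone

    def forward(self, x):
        return nn.functional.normalize(self.backbone(x), dim=1)

embedder = PatchEmbedder()
criterion = nn.TripletMarginLoss(margin=0.5)  # margin is an assumed value
optimizer = torch.optim.Adam(embedder.parameters(), lr=1e-4)

def train_step(anchor, positive, negative):
    # anchor/positive are patches deemed similar, negative dissimilar; how triplets are
    # mined (unsupervised, semi-supervised, or label-supervised) is the setup being compared.
    optimizer.zero_grad()
    loss = criterion(embedder(anchor), embedder(positive), embedder(negative))
    loss.backward()
    optimizer.step()
    return loss.item()
```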