HistoTransfer: Understanding Transfer Learning for Histopathology
- URL: http://arxiv.org/abs/2106.07068v1
- Date: Sun, 13 Jun 2021 18:55:23 GMT
- Title: HistoTransfer: Understanding Transfer Learning for Histopathology
- Authors: Yash Sharma, Lubaina Ehsan, Sana Syed, Donald E. Brown
- Abstract summary: We compare the performance of features extracted from networks trained on ImageNet and histopathology data.
We investigate whether features learned using more complex networks lead to a gain in performance.
- Score: 9.231495418218813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advancement in digital pathology and artificial intelligence has enabled deep
learning-based computer vision techniques for automated disease diagnosis and
prognosis. However, whole slide images (WSIs) present unique computational and
algorithmic challenges. WSIs are gigapixel-sized, making them infeasible to use
directly for training deep neural networks. Hence, for modeling, a two-stage
approach is adopted: Patch representations are extracted first, followed by the
aggregation for WSI prediction. These approaches require detailed pixel-level
annotations for training the patch encoder. However, obtaining these
annotations is time-consuming and tedious for medical experts. Transfer
learning is used to address this gap, and deep learning architectures
pre-trained on ImageNet are used for generating patch-level representations.
Even though ImageNet differs significantly from histopathology data,
pre-trained networks have been shown to perform impressively on histopathology
data. Also, progress in self-supervised and multi-task learning, coupled with
the release of multiple histopathology datasets, has led to the availability of
histopathology-specific networks. In this work, we compare the performance of
features extracted from networks trained on ImageNet and histopathology data.
We use an attention pooling network over these extracted features for
slide-level aggregation. We investigate whether features learned using more
complex networks lead to a gain in performance. We use a simple top-k sampling
approach for the fine-tuning framework and study the representation similarity
between frozen and fine-tuned networks using Centered Kernel Alignment. Further,
to examine whether intermediate block representations are better suited for
feature extraction and whether ImageNet architectures are unnecessarily large
for histopathology, we truncate the blocks of ResNet18 and DenseNet121 and
examine the performance.
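The attention pooling used for slide-level aggregation can be illustrated with a minimal sketch. The module below follows the soft-attention multiple-instance-learning formulation commonly used for WSI aggregation, not necessarily the paper's exact architecture; the feature dimension, hidden size, and bag size are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Aggregate a bag of patch embeddings into one slide-level prediction."""

    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
        super().__init__()
        # Attention scorer: one scalar score per patch embedding.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):              # (n_patches, feat_dim)
        scores = self.attention(patch_feats)     # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)   # normalize over the bag
        slide_feat = (weights * patch_feats).sum(dim=0)  # weighted average
        return self.classifier(slide_feat), weights

# Example: 1000 patch features from a frozen encoder -> slide-level logits.
pooler = AttentionPooling()
logits, attn = pooler(torch.randn(1000, 512))
```

The attention weights also indicate which patches drive the slide-level decision, which is the kind of signal a top-k sampling scheme can exploit when fine-tuning the patch encoder.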
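Centered Kernel Alignment, used above to compare frozen and fine-tuned representations, has a compact linear form. The function below is a minimal sketch of linear CKA over two activation matrices computed on the same examples; the shapes in the usage example are illustrative, not taken from the paper.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activations X (n, d1) and Y (n, d2) for the same n inputs."""
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 normalized by the self-similarity terms.
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

# Example: similarity between frozen and fine-tuned features for the same patches.
frozen = np.random.randn(256, 512)
finetuned = frozen + 0.1 * np.random.randn(256, 512)
print(linear_cka(frozen, finetuned))  # close to 1 for near-identical features
```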
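The block-truncation experiment can be sketched in a similar spirit: torchvision's ResNet18 exposes its stem and four residual stages as children, so an intermediate-block feature extractor is obtained by keeping only the earlier stages. The cut point, weights, and pooling below are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

def truncated_resnet18(num_stages=2):
    """Keep the stem plus the first `num_stages` residual stages of ResNet18."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    layers = [backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
              backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4]
    kept = layers[:4 + num_stages]          # 4 stem ops + the chosen stages
    # Global average pooling turns the remaining feature map into a vector.
    return nn.Sequential(*kept, nn.AdaptiveAvgPool2d(1), nn.Flatten())

# Example: 128-dimensional patch features from ResNet18 truncated after layer2.
encoder = truncated_resnet18(num_stages=2).eval()
with torch.no_grad():
    feats = encoder(torch.randn(8, 3, 224, 224))   # shape: (8, 128)
```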
Related papers
- Slicer Networks [8.43960865813102]
We propose the Slicer Network, a novel architecture for medical image analysis.
The Slicer Network strategically refines and upsamples feature maps via a splatting-blurring-slicing process.
Experiments across different medical imaging applications have verified the Slicer Network's improved accuracy and efficiency.
arXiv Detail & Related papers (2024-01-18T09:50:26Z) - A Deep Learning-based Compression and Classification Technique for Whole Slide Histopathology Images [0.31498833540989407]
We build an ensemble of neural networks that enables a compressive autoencoder, in a supervised fashion, to retain a denser and more meaningful representation of the input histology images.
We test the compressed images using transfer learning-based classifiers and show that they provide promising accuracy and classification performance.
arXiv Detail & Related papers (2023-05-11T22:20:05Z) - Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Self supervised contrastive learning for digital histopathology [0.0]
We use a contrastive self-supervised learning method called SimCLR that achieved state-of-the-art results on natural-scene images.
We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features.
Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet pretrained networks.
arXiv Detail & Related papers (2020-11-27T19:18:45Z) - Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z) - Graph Neural Networks for Unsupervised Domain Adaptation of Histopathological Image Analytics [22.04114134677181]
We present a novel method for unsupervised domain adaptation in histological image analysis.
It is based on a backbone for embedding images into a feature space, and a graph neural layer for propagating the supervision signals of images with labels.
In experiments, our method achieves state-of-the-art performance on four public datasets.
arXiv Detail & Related papers (2020-08-21T04:53:44Z) - HATNet: An End-to-End Holistic Attention Network for Diagnosis of Breast Biopsy Images [39.82731558467617]
We introduce a novel attention-based network, the Holistic ATtention Network (HATNet) to classify breast biopsy images.
It uses self-attention to encode global information, allowing it to learn representations from clinically relevant tissue structures without any explicit supervision.
Our analysis reveals that HATNet learns representations from clinically relevant structures, and it matches the classification accuracy of human pathologists for this challenging test set.
arXiv Detail & Related papers (2020-07-25T20:42:21Z)