Supervision and Source Domain Impact on Representation Learning: A
Histopathology Case Study
- URL: http://arxiv.org/abs/2005.08629v1
- Date: Sun, 10 May 2020 21:27:38 GMT
- Title: Supervision and Source Domain Impact on Representation Learning: A
Histopathology Case Study
- Authors: Milad Sikaroudi, Amir Safarpoor, Benyamin Ghojogh, Sobhan Shafiei,
Mark Crowley, H.R. Tizhoosh
- Abstract summary: In this work, we explored the performance of a deep neural network and triplet loss in the area of representation learning.
We investigated the notion of similarity and dissimilarity in pathology whole-slide images and compared different setups from unsupervised and semi-supervised to supervised learning.
We achieved high accuracy and generalization when the learned representations were applied to two different pathology datasets.
- Score: 6.762603053858596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As many algorithms depend on a suitable representation of data, learning
unique features is considered a crucial task. Although supervised techniques
using deep neural networks have boosted the performance of representation
learning, the need for a large set of labeled data limits the application of
such methods. As an example, the high-quality delineation of regions of interest
in the field of pathology is a tedious and time-consuming task due to the large
image dimensions. In this work, we explored the performance of a deep neural
network and triplet loss in the area of representation learning. We
investigated the notion of similarity and dissimilarity in pathology
whole-slide images and compared different setups from unsupervised and
semi-supervised to supervised learning in our experiments. Additionally,
different approaches were tested, applying few-shot learning on two publicly
available pathology image datasets. We achieved high accuracy and
generalization when the learned representations were applied to two different
pathology datasets.
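The paper itself does not include code. As a rough, hypothetical sketch of the triplet-loss representation learning setup described in the abstract (the backbone, margin, learning rate, and data-loader names below are assumptions for illustration, not the authors' implementation), a PyTorch version might look like this:

import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical embedding network: a CNN backbone whose classifier head is
# replaced by an L2-normalized embedding layer.
class EmbeddingNet(nn.Module):
    def __init__(self, embedding_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.backbone = backbone

    def forward(self, x):
        z = self.backbone(x)
        return nn.functional.normalize(z, dim=1)  # unit-length embeddings

model = EmbeddingNet()
# Triplet margin loss: pulls anchor and positive patches together and pushes
# the negative patch away by at least the margin.
criterion = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# `triplet_loader` is assumed to yield (anchor, positive, negative) patch
# batches; how "similar" and "dissimilar" patches are chosen depends on the
# supervision level being compared (unsupervised, semi-supervised, supervised).
def train_one_epoch(triplet_loader):
    model.train()
    for anchor, positive, negative in triplet_loader:
        optimizer.zero_grad()
        loss = criterion(model(anchor), model(positive), model(negative))
        loss.backward()
        optimizer.step()

In this sketch, only the triplet sampling strategy changes between the unsupervised, semi-supervised, and supervised setups compared in the paper; the embedding network and loss stay the same.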
Related papers
- Disease Classification and Impact of Pretrained Deep Convolution Neural Networks on Diverse Medical Imaging Datasets across Imaging Modalities [0.0]
This paper investigates the intricacies of using pretrained deep convolutional neural networks with transfer learning across diverse medical imaging datasets.
It shows that the use of pretrained models as fixed feature extractors yields poor performance irrespective of the datasets.
It is also found that deeper and more complex architectures did not necessarily result in the best performance.
arXiv Detail & Related papers (2024-08-30T04:51:19Z)
- Two Approaches to Supervised Image Segmentation [55.616364225463066]
The present work develops comparison experiments between deep learning and multiset-neuron approaches.
The deep learning approach confirmed its potential for performing image segmentation.
The alternative multiset methodology allowed for enhanced accuracy while requiring little computational resources.
arXiv Detail & Related papers (2023-07-19T16:42:52Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
Training an efficacious deep learning model requires large datasets with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip the deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
- Learning Representations with Contrastive Self-Supervised Learning for Histopathology Applications [8.69535649683089]
We show how contrastive self-supervised learning can reduce the annotation effort within digital pathology.
Our results pave the way for realizing the full potential of self-supervised learning for histopathology applications.
arXiv Detail & Related papers (2021-12-10T16:08:57Z)
- HistoTransfer: Understanding Transfer Learning for Histopathology [9.231495418218813]
We compare the performance of features extracted from networks trained on ImageNet and histopathology data.
We investigate whether features learned using more complex networks lead to a gain in performance.
arXiv Detail & Related papers (2021-06-13T18:55:23Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types [50.1843146606122]
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Federated Learning for Computational Pathology on Gigapixel Whole Slide Images [4.035591045544291]
We introduce privacy-preserving federated learning for gigapixel whole slide images in computational pathology.
We evaluate our approach on two different diagnostic problems using thousands of histology whole slide images with only slide-level labels.
arXiv Detail & Related papers (2020-09-21T21:56:08Z)
- Self-Path: Self-supervision for Classification of Pathology Images with Limited Annotations [4.713391888104896]
We propose a self-supervised CNN approach to leverage unlabeled data for learning generalizable and domain invariant representations in pathology images.
We introduce novel domain specific self-supervision tasks that leverage contextual, multi-resolution and semantic features in pathology images.
arXiv Detail & Related papers (2020-08-12T21:02:32Z)