CASS: Cross Architectural Self-Supervision for Medical Image Analysis
- URL: http://arxiv.org/abs/2206.04170v2
- Date: Fri, 10 Jun 2022 04:30:50 GMT
- Title: CASS: Cross Architectural Self-Supervision for Medical Image Analysis
- Authors: Pranav Singh, Elena Sizikova, Jacopo Cirrone
- Abstract summary: Cross Architectural Self-Supervision (CASS) is a novel self-supervised learning approach that leverages Transformers and CNNs simultaneously.
Compared to existing state-of-the-art self-supervised learning approaches, CASS-trained CNNs and Transformers gained an average of 8.5% with 100% labelled data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in Deep Learning and Computer Vision have alleviated many of
the bottlenecks, allowing algorithms to be label-free with better performance.
Specifically, Transformers provide a global perspective of the image, which
Convolutional Neural Networks (CNNs) lack by design. Here we present Cross
Architectural Self-Supervision (CASS), a novel self-supervised learning
approach that leverages Transformers and CNNs simultaneously, while also being
computationally accessible to general practitioners via easily available cloud
services. Compared to existing state-of-the-art self-supervised learning
approaches, CASS-trained CNNs and Transformers gained an average of 8.5% with
100% labelled data, 7.3% with 10% labelled data, and 11.5% with 1% labelled
data, across three diverse datasets. Notably, one of the employed datasets
included histopathology slides of an autoimmune disease, a topic that is
underrepresented in Medical Imaging and for which minimal data exists. In
addition, our findings reveal that CASS is twice as efficient as other
state-of-the-art methods in terms of training time.
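The abstract describes passing the same image through two encoder families (a CNN and a Transformer) and training them jointly. As a rough illustration only (a minimal NumPy sketch under our own assumptions, not the authors' implementation), the cross-architectural objective can be thought of as pulling the two architectures' embeddings of the same image toward each other; the placeholder loss below is a negative mean cosine similarity:

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Unit-normalize each row (one embedding per image).
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def cross_arch_loss(z_cnn, z_transformer):
    """Negative mean cosine similarity between the two encoders'
    embeddings of the same batch of images. This is a placeholder
    objective: lower means the two architectures agree more."""
    a = l2_normalize(z_cnn)
    b = l2_normalize(z_transformer)
    return -np.mean(np.sum(a * b, axis=1))

# Toy embeddings standing in for the outputs of a CNN and a Transformer.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 16))
aligned = cross_arch_loss(z, z)   # identical embeddings -> -1.0
opposed = cross_arch_loss(z, -z)  # opposite embeddings  -> +1.0
```

In an actual training loop each encoder would be a network and the gradient of the loss would update both; here the embeddings are fixed toy arrays.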
Related papers
- Efficient Representation Learning for Healthcare with Cross-Architectural Self-Supervision [5.439020425819001]
We present Cross Architectural - Self Supervision (CASS) in response to this challenge.
We show that CASS-trained CNNs and Transformers outperform existing self-supervised learning methods across four diverse healthcare datasets.
We also demonstrate that CASS is considerably more robust to variations in batch size and pretraining epochs, making it a suitable candidate for machine learning in healthcare applications.
arXiv Detail & Related papers (2023-08-19T15:57:19Z)
- One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations [59.17685450892182]
We investigate the behavior of deep representations in widely used CNN models under extreme data scarcity for One-Shot periocular recognition.
We improved on state-of-the-art results that were obtained with networks trained on biometric datasets containing millions of images.
Traditional algorithms like SIFT can outperform CNNs in situations with limited data.
arXiv Detail & Related papers (2023-07-11T09:10:16Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Cross-Architectural Positive Pairs improve the effectiveness of Self-Supervised Learning [0.0]
Cross Architectural - Self Supervision (CASS) is a novel self-supervised learning approach that leverages Transformer and CNN simultaneously.
We show that CASS-trained CNNs and Transformers across four diverse datasets gained an average of 3.8% with 1% labeled data.
We also show that CASS is much more robust to changes in batch size and training epochs than existing state-of-the-art self-supervised learning approaches.
arXiv Detail & Related papers (2023-01-27T23:27:24Z)
- Learning from few examples: Classifying sex from retinal images via deep learning [3.9146761527401424]
We showcase results for the performance of DL on small datasets to classify patient sex from fundus images.
Our models, developed using approximately 2500 fundus images, achieved test AUC scores of up to 0.72.
This corresponds to a mere 25% decrease in performance despite a nearly 1000-fold decrease in the dataset size.
arXiv Detail & Related papers (2022-07-20T02:47:29Z)
- Robust and Efficient Medical Imaging with Self-Supervision [80.62711706785834]
We present REMEDIS, a unified representation learning strategy to improve robustness and data-efficiency of medical imaging AI.
We study a diverse range of medical imaging tasks and simulate three realistic application scenarios using retrospective data.
arXiv Detail & Related papers (2022-05-19T17:34:18Z)
- Fusion of CNNs and statistical indicators to improve image classification [65.51757376525798]
Convolutional Networks have dominated the field of computer vision for the last ten years.
The main strategy to prolong this trend relies on further scaling up networks in size.
We hypothesise that adding heterogeneous sources of information may be more cost-effective for a CNN than building a bigger network.
arXiv Detail & Related papers (2020-12-20T23:24:31Z)
- Self supervised contrastive learning for digital histopathology [0.0]
We use a contrastive self-supervised learning method called SimCLR that achieved state-of-the-art results on natural-scene images.
We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features.
Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet pretrained networks.
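The SimCLR method referenced above trains on pairs of augmented views of the same image with the NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss. A minimal NumPy sketch of the standard NT-Xent formulation (our own illustration, not this paper's code; `tau` is the temperature hyperparameter):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss as used by SimCLR. z1[i] and z2[i] are
    embeddings of two augmented views of the same image i; each view's
    positive is its partner, all other views in the batch are negatives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    sim = (z @ z.T) / tau                             # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # mask self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # partner index
    logits = sim - sim.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views map to identical embeddings the loss is near its minimum; when all embeddings are mutually orthogonal the positives are indistinguishable from the negatives and the loss equals log(2N-1) for batch size N.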
arXiv Detail & Related papers (2020-11-27T19:18:45Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.