Self-Supervised-RCNN for Medical Image Segmentation with Limited Data Annotation
- URL: http://arxiv.org/abs/2207.11191v1
- Date: Sun, 17 Jul 2022 13:28:52 GMT
- Title: Self-Supervised-RCNN for Medical Image Segmentation with Limited Data Annotation
- Authors: Banafshe Felfeliyan, Abhilash Hareendranathan, Gregor Kuntze, David
Cornell, Nils D. Forkert, Jacob L. Jaremko, and Janet L. Ronsky
- Abstract summary: We propose an alternative deep learning training strategy based on self-supervised pretraining on unlabeled MRI scans.
Our pretraining approach first randomly applies different distortions to random areas of unlabeled images and then predicts the type of distortion and the resulting loss of information.
The effectiveness of the proposed method for segmentation tasks in different pre-training and fine-tuning scenarios is evaluated.
- Score: 0.16490701092527607
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Many successful methods developed for medical image analysis that are based
on machine learning use supervised learning approaches, which often require
large datasets annotated by experts to achieve high accuracy. However, medical
data annotation is time-consuming and expensive, especially for segmentation
tasks. To solve the problem of learning with limited labeled medical image
data, an alternative deep learning training strategy based on self-supervised
pretraining on unlabeled MRI scans is proposed in this work. Our pretraining
approach first randomly applies different distortions to random areas of
unlabeled images and then predicts the type of distortion and the resulting
loss of information. To this end, an improved version of the Mask-RCNN
architecture is adapted to localize the distortion and recover the original
image pixels. The effectiveness of the proposed method for segmentation tasks in
different pre-training and fine-tuning scenarios is evaluated based on the
Osteoarthritis Initiative dataset. Using this self-supervised pretraining
method improved the Dice score by 20% compared to training from scratch. The
proposed self-supervised learning is simple, effective, and suitable for
different ranges of medical image analysis tasks including anomaly detection,
segmentation, and classification.
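The pretext task described above can be sketched as a simple data pipeline: pick a random region, apply a random distortion, and record the targets (distortion label and region mask) that a Mask-RCNN-style model would learn to predict and invert. This is a minimal illustration only; the specific distortion set (noise, mean-blur, erase) and the fixed region size are hypothetical choices, not taken from the paper.

```python
# Hypothetical sketch of the distort-and-predict pretraining signal.
# The distortion vocabulary and region size are illustrative assumptions.
import numpy as np

DISTORTIONS = ["noise", "blur", "erase"]  # assumed distortion types

def distort_region(image, rng, size=8):
    """Apply a random distortion to a random square region.

    Returns the distorted image, the distortion label, and a binary mask
    marking the altered pixels -- the supervision targets for predicting
    the distortion type/location and recovering the original pixels.
    """
    h, w = image.shape
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    label = DISTORTIONS[rng.integers(len(DISTORTIONS))]
    out = image.copy()
    patch = out[y:y + size, x:x + size]  # view into the copy
    if label == "noise":
        patch += rng.normal(0.0, 0.1, patch.shape)
    elif label == "blur":
        patch[:] = patch.mean()          # crude stand-in for a blur kernel
    else:                                # "erase": simulated information loss
        patch[:] = 0.0
    mask = np.zeros_like(image, dtype=bool)
    mask[y:y + size, x:x + size] = True
    return out, label, mask

rng = np.random.default_rng(0)
img = rng.random((32, 32))
distorted, label, mask = distort_region(img, rng)
```

During pretraining, `(distorted, mask, label, img)` tuples would serve as training examples: the network sees `distorted` and is asked to localize `mask`, classify `label`, and reconstruct the pixels of `img` inside the region.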
Related papers
- Self-Supervised Pre-Training with Contrastive and Masked Autoencoder
Methods for Dealing with Small Datasets in Deep Learning for Medical Imaging [8.34398674359296]
Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis.
Training such deep learning models requires large and accurate datasets, with annotations for all training samples.
To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning.
arXiv Detail & Related papers (2023-08-12T11:31:01Z) - Disruptive Autoencoders: Leveraging Low-level features for 3D Medical
Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z) - LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - Self-supervised Model Based on Masked Autoencoders Advance CT Scans
Classification [0.0]
This paper is inspired by the self-supervised learning algorithm MAE.
It uses the MAE model pre-trained on ImageNet to perform transfer learning on CT Scans dataset.
This method improves the generalization performance of the model and avoids the risk of overfitting on small datasets.
arXiv Detail & Related papers (2022-10-11T00:52:05Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which helps the discriminator converge to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Self-Supervised Learning as a Means To Reduce the Need for Labeled Data
in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z) - Intelligent Masking: Deep Q-Learning for Context Encoding in Medical
Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z) - About Explicit Variance Minimization: Training Neural Networks for
Medical Imaging With Limited Data Annotations [2.3204178451683264]
Variance Aware Training (VAT) method exploits this property by introducing the variance error into the model loss function.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
arXiv Detail & Related papers (2021-05-28T21:34:04Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing
Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.