Improving colonoscopy lesion classification using semi-supervised deep
learning
- URL: http://arxiv.org/abs/2009.03162v1
- Date: Mon, 7 Sep 2020 15:25:35 GMT
- Title: Improving colonoscopy lesion classification using semi-supervised deep
learning
- Authors: Mayank Golhar, Taylor L. Bobrow, MirMilad Pourmousavi Khoshknab,
Simran Jit, Saowanee Ngamruengphong, Nicholas J. Durr
- Abstract summary: Recent work in semi-supervised learning has shown that meaningful representations of images can be obtained from training with large quantities of unlabeled data.
We demonstrate that an unsupervised jigsaw learning task, in combination with supervised training, results in up to a 9.8% improvement in correctly classifying lesions in colonoscopy images.
- Score: 2.568264809297699
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While data-driven approaches excel at many image analysis tasks, the
performance of these approaches is often limited by a shortage of annotated
data available for training. Recent work in semi-supervised learning has shown
that meaningful representations of images can be obtained from training with
large quantities of unlabeled data, and that these representations can improve
the performance of supervised tasks. Here, we demonstrate that an unsupervised
jigsaw learning task, in combination with supervised training, results in up to
a 9.8% improvement in correctly classifying lesions in colonoscopy images when
compared to a fully-supervised baseline. We additionally benchmark improvements
in domain adaptation and out-of-distribution detection, and demonstrate that
semi-supervised learning outperforms supervised learning in both cases. In
colonoscopy applications, these metrics are important given the skill required
for endoscopic assessment of lesions, the wide variety of endoscopy systems in
use, and the homogeneity that is typical of labeled datasets.
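The jigsaw pretext task mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the 3x3 grid, the permutation count, and the helper name `make_jigsaw_sample` are illustrative assumptions. The idea is that the permutation index acts as a free label derived from unlabeled images.

```python
import numpy as np

# Illustrative jigsaw pretext task: split an image into a 3x3 grid of
# tiles, shuffle the tiles by one of a fixed set of permutations, and
# train a network to predict which permutation was applied.
GRID = 3
N_PERMUTATIONS = 10  # published variants often use ~100 permutations

rng = np.random.default_rng(0)
PERMUTATIONS = [rng.permutation(GRID * GRID) for _ in range(N_PERMUTATIONS)]

def make_jigsaw_sample(image: np.ndarray) -> tuple[np.ndarray, int]:
    """Return shuffled tiles and the index of the permutation used."""
    h, w = image.shape[0] // GRID, image.shape[1] // GRID
    tiles = [image[r * h:(r + 1) * h, c * w:(c + 1) * w]
             for r in range(GRID) for c in range(GRID)]
    label = int(rng.integers(N_PERMUTATIONS))
    shuffled = [tiles[i] for i in PERMUTATIONS[label]]
    return np.stack(shuffled), label

image = rng.random((96, 96, 3))   # stand-in for a colonoscopy frame
tiles, label = make_jigsaw_sample(image)
print(tiles.shape, label)         # tiles have shape (9, 32, 32, 3)
```

Because the network must reason about the relative arrangement of tissue textures to solve the puzzle, it learns structure-aware features from unlabeled frames that can then be transferred to the supervised lesion classifier.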
Related papers
- Multi-organ Self-supervised Contrastive Learning for Breast Lesion
Segmentation [0.0]
This paper employs multi-organ datasets for pre-training models tailored to specific organ-related target tasks.
Our target task is breast tumour segmentation in ultrasound images.
Results show that conventional contrastive learning pre-training improves performance compared to supervised baseline approaches.
arXiv Detail & Related papers (2024-02-21T20:29:21Z)
- A Survey of the Impact of Self-Supervised Pretraining for Diagnostic
Tasks with Radiological Images [71.26717896083433]
Self-supervised pretraining has been observed to be effective at improving feature representations for transfer learning.
This review summarizes recent research into its usage in X-ray, computed tomography, magnetic resonance, and ultrasound imaging.
arXiv Detail & Related papers (2023-09-05T19:45:09Z)
- Defect detection using weakly supervised learning [1.4321190258774352]
Weakly supervised learning techniques have gained significant attention in recent years as an alternative to traditional supervised learning.
In this paper, the performance of a weakly supervised classifier is compared to that of its fully supervised counterpart on the task of defect detection.
arXiv Detail & Related papers (2023-03-27T11:01:16Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A
Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Contrastive learning for unsupervised medical image clustering and
reconstruction [0.23624125155742057]
We propose an unsupervised autoencoder framework which is augmented with a contrastive loss to encourage high separability in the latent space.
Our method achieves similar performance to the supervised architecture, indicating that separation in the latent space reproduces expert medical observer-assigned labels.
arXiv Detail & Related papers (2022-09-24T13:17:02Z)
- Dissecting Self-Supervised Learning Methods for Surgical Computer Vision [51.370873913181605]
Self-Supervised Learning (SSL) methods have begun to gain traction in the general computer vision community.
The effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains largely unexplored.
We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding, phase recognition and tool presence detection.
arXiv Detail & Related papers (2022-07-01T14:17:11Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep
Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models with ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Evaluating the Robustness of Self-Supervised Learning in Medical Imaging [57.20012795524752]
Self-supervision has been demonstrated to be an effective learning strategy when training target tasks on small annotated datasets.
We show that networks trained via self-supervised learning have superior robustness and generalizability compared to fully-supervised learning in the context of medical imaging.
arXiv Detail & Related papers (2021-05-14T17:49:52Z)
- Self-supervised driven consistency training for annotation efficient
histopathology image analysis [13.005873872821066]
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with task-specific unlabeled data.
arXiv Detail & Related papers (2021-02-07T19:46:21Z)
- Semi-Automatic Data Annotation guided by Feature Space Projection [117.9296191012968]
We present a semi-automatic data annotation approach based on suitable feature space projection and semi-supervised label estimation.
We validate our method on the popular MNIST dataset and on images of human intestinal parasites with and without fecal impurities.
Our results demonstrate the added-value of visual analytics tools that combine complementary abilities of humans and machines for more effective machine learning.
arXiv Detail & Related papers (2020-07-27T17:03:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.