Self-Supervised Versus Supervised Training for Segmentation of Organoid
Images
- URL: http://arxiv.org/abs/2311.11198v1
- Date: Sun, 19 Nov 2023 01:57:55 GMT
- Title: Self-Supervised Versus Supervised Training for Segmentation of Organoid
Images
- Authors: Asmaa Haja, Eric Brouwer and Lambert Schomaker
- Abstract summary: Large amounts of microscopic image data sets remain unlabeled, preventing their effective exploitation using deep-learning algorithms.
Self-supervised learning (SSL) is a promising solution based on learning intrinsic features under a pretext task that is similar to the main task without requiring labels.
A ResNet50 U-Net was first trained to restore images of liver progenitor organoids from augmented images using the Structural Similarity Index Metric (SSIM) alone, and using SSIM combined with L1 loss.
For comparison, we used the same U-Net architecture to train two supervised models, one utilizing the ResNet50 encoder as well as a simple CNN.
- Score: 2.6242820867975127
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The process of annotating relevant data in the field of digital microscopy
can be both time-consuming and especially expensive due to the required
technical skills and human-expert knowledge. Consequently, large amounts of
microscopic image data sets remain unlabeled, preventing their effective
exploitation using deep-learning algorithms. In recent years it has been shown
that a lot of relevant information can be drawn from unlabeled data.
Self-supervised learning (SSL) is a promising solution based on learning
intrinsic features under a pretext task that is similar to the main task
without requiring labels. The trained result is transferred to the main task -
image segmentation in our case. A ResNet50 U-Net was first trained to restore
images of liver progenitor organoids from augmented images using the Structural
Similarity Index Metric (SSIM) alone, and using SSIM combined with L1 loss.
Both the encoder and decoder were trained in tandem. The weights were
transferred to another U-Net model designed for segmentation with frozen
encoder weights, using Binary Cross Entropy, Dice, and Intersection over Union
(IoU) losses. For comparison, we used the same U-Net architecture to train two
supervised models, one utilizing the ResNet50 encoder as well as a simple CNN.
Results showed that self-supervised learning models using a 25% pixel drop or
image blurring augmentation performed better than the other augmentation
techniques using the IoU loss. When trained on only 114 images for the main
task, the self-supervised learning approach outperforms the supervised method,
achieving an F1-score of 0.85 with higher stability, compared with an F1-score
of 0.78 for the supervised method. Furthermore, when trained with larger data
sets (1,000 images), self-supervised learning still performs better, achieving
an F1-score of 0.92 versus 0.85 for the supervised method.
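The loss functions named in the abstract are straightforward to make concrete. The sketch below is a minimal NumPy illustration, not the authors' implementation: it shows a simplified global-window SSIM (the paper's SSIM would typically use a sliding Gaussian window), a blended SSIM + L1 restoration loss of the kind used for the pretext task, and soft Dice and IoU losses of the kind used for the downstream segmentation head. The blending weight `alpha` is an assumption for illustration.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM computed over the whole image (single global window).
    Inputs are assumed to be float arrays scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def restoration_loss(pred, target, alpha=0.5):
    """Pretext-task loss: (1 - SSIM), optionally blended with L1.
    alpha=1.0 recovers the SSIM-only variant; alpha is a hypothetical weight."""
    ssim_term = 1.0 - ssim_global(pred, target)
    l1_term = np.abs(pred - target).mean()
    return alpha * ssim_term + (1.0 - alpha) * l1_term

def dice_loss(pred, mask, eps=1e-6):
    """Soft Dice loss for a predicted probability map against a binary mask."""
    inter = (pred * mask).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + mask.sum() + eps)

def iou_loss(pred, mask, eps=1e-6):
    """Soft IoU (Jaccard) loss for a predicted probability map."""
    inter = (pred * mask).sum()
    union = pred.sum() + mask.sum() - inter
    return 1.0 - (inter + eps) / (union + eps)
```

For identical prediction and target, `ssim_global` returns 1 and `restoration_loss` is 0; a perfect segmentation drives both `dice_loss` and `iou_loss` to 0, which is the behavior the training objectives above rely on.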
Related papers
- Class Anchor Margin Loss for Content-Based Image Retrieval [97.81742911657497]
We propose a novel repeller-attractor loss that falls within the metric learning paradigm, yet directly optimizes for the L2 metric without the need to generate pairs.
We evaluate the proposed objective in the context of few-shot and full-set training on the CBIR task, by using both convolutional and transformer architectures.
arXiv Detail & Related papers (2023-06-01T12:53:10Z) - DeSTSeg: Segmentation Guided Denoising Student-Teacher for Anomaly
Detection [18.95747313320397]
We propose an improved model called DeSTSeg, which integrates a pre-trained teacher network, a denoising student encoder-decoder, and a segmentation network into one framework.
Our method achieves state-of-the-art performance, 98.6% on image-level AUC, 75.8% on pixel-level average precision, and 76.4% on instance-level average precision.
arXiv Detail & Related papers (2022-11-21T10:01:03Z) - EfficientTrain: Exploring Generalized Curriculum Learning for Training
Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers).
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z) - Masked Unsupervised Self-training for Zero-shot Image Classification [98.23094305347709]
Masked Unsupervised Self-Training (MUST) is a new approach which leverages two different and complementary sources of supervision: pseudo-labels and raw images.
MUST improves upon CLIP by a large margin and narrows the performance gap between unsupervised and supervised classification.
arXiv Detail & Related papers (2022-06-07T02:03:06Z) - Corrupted Image Modeling for Self-Supervised Visual Pre-Training [103.99311611776697]
We introduce Corrupted Image Modeling (CIM) for self-supervised visual pre-training.
CIM uses an auxiliary generator with a small trainable BEiT to corrupt the input image instead of using artificial mask tokens.
After pre-training, the enhancer can be used as a high-capacity visual encoder for downstream tasks.
arXiv Detail & Related papers (2022-02-07T17:59:04Z) - Self-Supervised Pre-Training for Transformer-Based Person
Re-Identification [54.55281692768765]
Transformer-based supervised pre-training achieves great performance in person re-identification (ReID).
Due to the domain gap between ImageNet and ReID datasets, it usually needs a larger pre-training dataset to boost the performance.
This work aims to mitigate the gap between the pre-training and ReID datasets from the perspective of data and model structure.
arXiv Detail & Related papers (2021-11-23T18:59:08Z) - Continual Contrastive Self-supervised Learning for Image Classification [10.070132585425938]
Self-supervised learning methods show tremendous potential for learning visual representations without any labeled data at scale.
To improve the visual representation of self-supervised learning, larger and more varied data is needed.
In this paper, we make the first attempt to implement continual contrastive self-supervised learning by proposing a rehearsal method.
arXiv Detail & Related papers (2021-07-05T03:53:42Z) - AugNet: End-to-End Unsupervised Visual Representation Learning with
Image Augmentation [3.6790362352712873]
We propose AugNet, a new deep learning training paradigm to learn image features from a collection of unlabeled pictures.
Our experiments demonstrate that the method is able to represent images in a low-dimensional space.
Unlike many deep-learning-based image retrieval algorithms, our approach does not require access to external annotated datasets.
arXiv Detail & Related papers (2021-06-11T09:02:30Z) - Self supervised contrastive learning for digital histopathology [0.0]
We use a contrastive self-supervised learning method called SimCLR that achieved state-of-the-art results on natural-scene images.
We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features.
Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet pretrained networks.
arXiv Detail & Related papers (2020-11-27T19:18:45Z) - 3D medical image segmentation with labeled and unlabeled data using
autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.