Edge-Based Self-Supervision for Semi-Supervised Few-Shot Microscopy
Image Cell Segmentation
- URL: http://arxiv.org/abs/2208.02105v1
- Date: Wed, 3 Aug 2022 14:35:00 GMT
- Title: Edge-Based Self-Supervision for Semi-Supervised Few-Shot Microscopy
Image Cell Segmentation
- Authors: Youssef Dawoud, Katharina Ernst, Gustavo Carneiro, Vasileios
Belagiannis
- Abstract summary: We propose the prediction of edge-based maps for self-supervising the training of the unlabelled images.
In our experiments, we show that only a small number of annotated images, e.g. 10% of the original training set, is enough for our approach to reach similar performance as with the fully annotated databases on 1- to 10-shots.
- Score: 16.94384366469512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks currently deliver promising results for microscopy image
cell segmentation, but they require large-scale labelled databases, which is a
costly and time-consuming process. In this work, we relax the labelling
requirement by combining self-supervised with semi-supervised learning. We
propose the prediction of edge-based maps for self-supervising the training of
the unlabelled images, which is combined with the supervised training of a
small number of labelled images for learning the segmentation task. In our
experiments, we evaluate on a few-shot microscopy image cell segmentation
benchmark and show that only a small number of annotated images, e.g. 10% of
the original training set, is enough for our approach to reach similar
performance as with the fully annotated databases on 1- to 10-shots. Our code
and trained models are made publicly available.
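Below is a minimal sketch of how such edge-based self-supervision could be combined with supervised few-shot training. It assumes Sobel-derived edge maps as the self-supervised target, a toy encoder-decoder with a segmentation head and an edge head, and a simple weighted sum of the two losses; the authors' exact edge-map definition, network architecture, and loss weighting are not specified in the abstract, so treat every name and hyperparameter here as illustrative.

```python
# Hypothetical sketch: edge-based self-supervision + few-shot supervised segmentation.
# Assumptions (not from the paper): Sobel edge targets, a tiny encoder-decoder,
# and loss = supervised BCE + lam * self-supervised edge BCE.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sobel_edge_map(gray: torch.Tensor) -> torch.Tensor:
    """Approximate edge map of a (B, 1, H, W) grayscale batch via Sobel filters."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=gray.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)
    return (mag / mag.amax(dim=(2, 3), keepdim=True)).clamp(0, 1)


class SegNet(nn.Module):
    """Toy shared backbone with two heads: cell segmentation and edge prediction."""
    def __init__(self, ch: int = 16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(ch, 1, 1)    # supervised: cell masks
        self.edge_head = nn.Conv2d(ch, 1, 1)   # self-supervised: edge maps

    def forward(self, x):
        feats = self.backbone(x)
        return self.seg_head(feats), self.edge_head(feats)


def training_step(model, opt, labelled, unlabelled, lam: float = 1.0):
    """One joint update: supervised loss on the few labelled images,
    edge-prediction loss on the unlabelled images."""
    (x_l, y_l), x_u = labelled, unlabelled
    seg_l, _ = model(x_l)
    _, edge_u = model(x_u)
    sup = F.binary_cross_entropy_with_logits(seg_l, y_l)
    self_sup = F.binary_cross_entropy_with_logits(edge_u, sobel_edge_map(x_u))
    loss = sup + lam * self_sup
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


if __name__ == "__main__":
    model = SegNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # A 2-shot labelled batch and a larger unlabelled batch of random images.
    x_l, y_l = torch.rand(2, 1, 64, 64), (torch.rand(2, 1, 64, 64) > 0.5).float()
    x_u = torch.rand(8, 1, 64, 64)
    print(training_step(model, opt, (x_l, y_l), x_u))
```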
Related papers
- Self-supervised dense representation learning for live-cell microscopy
with time arrow prediction [0.0]
We present a self-supervised method that learns dense image representations from raw, unlabeled live-cell microscopy videos.
We show that the resulting dense representations capture inherently time-asymmetric biological processes such as cell divisions on a pixel-level.
Our method outperforms supervised methods, particularly when only limited ground truth annotations are available.
arXiv Detail & Related papers (2023-05-09T14:58:13Z)
- Learning with minimal effort: leveraging in silico labeling for cell and
nucleus segmentation [0.6465251961564605]
We propose to use In Silico Labeling (ISL) as a pretraining scheme for segmentation tasks.
By comparing segmentation performance across several training set sizes, we show that such a scheme can dramatically reduce the number of required annotations.
arXiv Detail & Related papers (2023-01-10T11:35:14Z)
- Which Pixel to Annotate: a Label-Efficient Nuclei Segmentation Framework [70.18084425770091]
Deep neural networks have been widely applied in nuclei instance segmentation of H&E stained pathology images.
It is inefficient and unnecessary to label all pixels for a dataset of nuclei images which usually contain similar and redundant patterns.
We propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner.
arXiv Detail & Related papers (2022-12-20T14:53:26Z)
- Learning to Annotate Part Segmentation with Gradient Matching [58.100715754135685]
This paper focuses on tackling semi-supervised part segmentation tasks by generating high-quality images with a pre-trained GAN.
In particular, we formulate the annotator learning as a learning-to-learn problem.
We show that our method can learn annotators from a broad range of labelled images including real images, generated images, and even analytically rendered images.
arXiv Detail & Related papers (2022-11-06T01:29:22Z)
- Efficient Self-Supervision using Patch-based Contrastive Learning for
Histopathology Image Segmentation [0.456877715768796]
We propose a framework for self-supervised image segmentation using contrastive learning on image patches.
A fully convolutional neural network (FCNN) is trained in a self-supervised manner to discern features in the input images.
The proposed model consists only of a simple FCNN with 10.8k parameters and requires about 5 minutes to converge on the high-resolution microscopy datasets.
arXiv Detail & Related papers (2022-08-23T07:24:47Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Uncertainty guided semi-supervised segmentation of retinal layers in OCT
images [4.046207281399144]
We propose a novel uncertainty-guided semi-supervised learning based on a student-teacher approach for training the segmentation network.
The proposed framework is a key contribution and is applicable to biomedical image segmentation across various imaging modalities.
arXiv Detail & Related papers (2021-03-02T23:14:25Z)
- Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z)
- Few-Shot Microscopy Image Cell Segmentation [15.510258960276083]
Automatic cell segmentation in microscopy images works well with the support of deep neural networks trained with full supervision.
We propose the combination of three objective functions to segment the cells and move the segmentation results away from the classification boundary.
Our experiments on five public databases show promising results from 1- to 10-shot meta-learning.
arXiv Detail & Related papers (2020-06-29T12:12:10Z)
- Self-Supervised Viewpoint Learning From Image Collections [116.56304441362994]
We propose a novel learning framework which incorporates an analysis-by-synthesis paradigm to reconstruct images in a viewpoint aware manner.
We show that our approach performs competitively with fully-supervised approaches for several object categories such as human faces, cars, buses, and trains.
arXiv Detail & Related papers (2020-04-03T22:01:41Z)
- CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-24T04:55:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.