Learning with minimal effort: leveraging in silico labeling for cell and
nucleus segmentation
- URL: http://arxiv.org/abs/2301.03914v1
- Date: Tue, 10 Jan 2023 11:35:14 GMT
- Title: Learning with minimal effort: leveraging in silico labeling for cell and
nucleus segmentation
- Authors: Thomas Bonte, Maxence Philbert, Emeline Coleno, Edouard Bertrand,
Arthur Imbert and Thomas Walter
- Abstract summary: We propose to use In Silico Labeling (ISL) as a pretraining scheme for segmentation tasks.
By comparing segmentation performance across several training set sizes, we show that such a scheme can dramatically reduce the number of required annotations.
- Score: 0.6465251961564605
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep learning provides us with powerful methods to perform nucleus or cell
segmentation with unprecedented quality. However, these methods usually require
large training sets of manually annotated images, which are tedious and
expensive to generate. In this paper we propose to use In Silico Labeling (ISL)
as a pretraining scheme for segmentation tasks. The strategy is to acquire
label-free microscopy images (such as bright-field or phase contrast) alongside
fluorescently labeled images (such as DAPI or CellMask). We then train a model
to predict the fluorescently labeled images from the label-free microscopy
images. By comparing segmentation performance across several training set
sizes, we show that such a scheme can dramatically reduce the number of
required annotations.
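The two-step scheme above (pretrain on fluorescence prediction, then fine-tune for segmentation with few annotations) can be sketched in miniature. The snippet below is an illustrative toy, not the paper's implementation: synthetic vectors stand in for label-free patches and fluorescent targets, least squares stands in for the ISL network, and a logistic head stands in for the segmentation model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for paired acquisitions: X holds flattened label-free
# patches, Y the matching fluorescent-channel intensities. Hypothetical
# synthetic data; real inputs would be bright-field / DAPI image patches.
n, d = 500, 9
X = rng.normal(size=(n, d))
true_w = rng.normal(size=(d, 1))
Y = X @ true_w + 0.05 * rng.normal(size=(n, 1))

# Step 1 -- pretraining without manual annotations: fit a predictor of
# the fluorescent signal from the label-free signal (least squares here,
# standing in for the ISL network).
w_pre, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Step 2 -- fine-tuning: reuse the pretrained weights to initialise a
# segmentation head trained on only a handful of annotated patches.
masks = (Y > 0).astype(float)   # pseudo ground-truth segmentation labels
few = 20                        # size of the tiny annotated subset
w = w_pre.copy()
for _ in range(300):            # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(X[:few] @ w)))
    w -= 0.1 * X[:few].T @ (p - masks[:few]) / few

acc = ((X @ w > 0).astype(float) == masks).mean()
```

Because the pretraining stage already aligns the weights with structure shared by the two modalities, the segmentation head performs well from only 20 "annotated" samples; this mirrors the paper's point that ISL pretraining cuts annotation requirements, without reproducing its actual architecture.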
Related papers
- DiffKillR: Killing and Recreating Diffeomorphisms for Cell Annotation in Dense Microscopy Images [105.46086313858062]
We introduce DiffKillR, a novel framework that reframes cell annotation as the combination of archetype matching and image registration tasks.
We discuss the theoretical properties of DiffKillR and validate it on three microscopy tasks, demonstrating its advantages over existing supervised, semi-supervised, and unsupervised methods.
arXiv Detail & Related papers (2024-10-04T00:38:29Z)
- Predicting fluorescent labels in label-free microscopy images with pix2pix and adaptive loss in Light My Cells challenge [12.373115873950296]
We propose a deep learning-based in silico labeling method for the Light My Cells challenge.
Our method achieves promising performance for in silico labeling.
arXiv Detail & Related papers (2024-06-22T03:10:23Z)
- Leverage Weakly Annotation to Pixel-wise Annotation via Zero-shot Segment Anything Model for Molecular-empowered Learning [4.722512095568422]
Building an AI model requires pixel-level annotations, which scale poorly and must be produced by skilled domain experts.
In this paper, we explore the potential of bypassing pixel-level delineation by employing the recent segment anything model (SAM) on weak box annotation.
Our findings show that the proposed SAM-assisted molecular-empowered learning (SAM-L) can diminish the labeling efforts for lay annotators by only requiring weak box annotations.
arXiv Detail & Related papers (2023-08-10T16:44:24Z)
- Distilling Self-Supervised Vision Transformers for Weakly-Supervised Few-Shot Classification & Segmentation [58.03255076119459]
We address the task of weakly-supervised few-shot image classification and segmentation by leveraging a Vision Transformer (ViT).
Our proposed method takes token representations from the self-supervised ViT and leverages their correlations, via self-attention, to produce classification and segmentation predictions.
Experiments on Pascal-5i and COCO-20i demonstrate significant performance gains in a variety of supervision settings.
arXiv Detail & Related papers (2023-07-07T06:16:43Z)
- Which Pixel to Annotate: a Label-Efficient Nuclei Segmentation Framework [70.18084425770091]
Deep neural networks have been widely applied in nuclei instance segmentation of H&E stained pathology images.
It is inefficient and unnecessary to label all pixels in a dataset of nuclei images, which usually contain similar and redundant patterns.
We propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner.
arXiv Detail & Related papers (2022-12-20T14:53:26Z)
- Learning to Annotate Part Segmentation with Gradient Matching [58.100715754135685]
This paper focuses on tackling semi-supervised part segmentation tasks by generating high-quality images with a pre-trained GAN.
In particular, we formulate the annotator learning as a learning-to-learn problem.
We show that our method can learn annotators from a broad range of labelled images including real images, generated images, and even analytically rendered images.
arXiv Detail & Related papers (2022-11-06T01:29:22Z)
- Edge-Based Self-Supervision for Semi-Supervised Few-Shot Microscopy Image Cell Segmentation [16.94384366469512]
We propose predicting edge-based maps as a self-supervision signal for training on the unlabelled images.
In our experiments, we show that only a small number of annotated images, e.g. 10% of the original training set, is enough for our approach to reach performance similar to that of fully annotated databases in 1- to 10-shot settings.
arXiv Detail & Related papers (2022-08-03T14:35:00Z)
- Self Supervised Learning for Few Shot Hyperspectral Image Classification [57.2348804884321]
We propose to leverage Self Supervised Learning (SSL) for HSI classification.
We show that by pre-training an encoder on unlabeled pixels using Barlow-Twins, a state-of-the-art SSL algorithm, we can obtain accurate models with a handful of labels.
arXiv Detail & Related papers (2022-06-24T07:21:53Z)
- Semi-supervised Contrastive Learning for Label-efficient Medical Image Segmentation [11.935891325600952]
We propose a supervised local contrastive loss that leverages limited pixel-wise annotation to force pixels with the same label to cluster together in the embedding space.
With different amounts of labeled data, our methods consistently outperform the state-of-the-art contrast-based methods and other semi-supervised learning techniques.
arXiv Detail & Related papers (2021-09-15T16:23:48Z)
- Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image Segmentation [6.889911520730388]
We aim to boost the performance of semi-supervised learning for medical image segmentation with limited labels.
We learn latent representations directly at feature-level by imposing contrastive loss on unlabeled images.
We conduct experiments on an MRI and a CT segmentation dataset and demonstrate that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-05-27T03:27:58Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
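Several entries above (the semi-supervised contrastive segmentation papers in particular) share one mechanism: pixels with the same label are pulled together in an embedding space while all others are pushed apart. Below is a minimal numpy sketch of such a supervised contrastive loss, using synthetic 2-D embeddings in place of network features; all names and shapes are illustrative and not taken from any of the cited implementations.

```python
import numpy as np

def supervised_pixel_contrastive_loss(emb, labels, temperature=0.1):
    """For each anchor pixel, treat pixels with the same label as
    positives and all others as negatives (InfoNCE-style). A simplified
    sketch of a supervised local contrastive objective, not any paper's
    exact formulation."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit vectors
    sim = emb @ emb.T / temperature                          # scaled cosine
    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        pos = (labels == labels[i]) & (np.arange(n) != i)
        if not pos.any():
            continue
        logits = np.delete(sim[i], i)                 # drop the self-pair
        mask = np.delete(pos, i)
        log_prob = logits - np.log(np.exp(logits).sum())
        total += -log_prob[mask].mean()               # average over positives
        count += 1
    return total / count

rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 8)
# Embeddings already clustered by label: label 0 near (1, 1), label 1
# near (-1, -1), plus a little noise.
emb = np.where(labels[:, None] == 0, 1.0, -1.0) + 0.1 * rng.normal(size=(16, 2))
aligned = supervised_pixel_contrastive_loss(emb, labels)
# Same embeddings, deliberately misaligned (interleaved) labels.
misaligned = supervised_pixel_contrastive_loss(emb, np.tile([0, 1], 8))
```

When the labels match the embedding clusters the loss is low; interleaving the labels over the same embeddings raises it sharply, and that gap is exactly the gradient signal that pulls same-label pixels together during training.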
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.