Self-supervised pseudo-colorizing of masked cells
- URL: http://arxiv.org/abs/2302.05968v2
- Date: Mon, 28 Aug 2023 11:14:31 GMT
- Title: Self-supervised pseudo-colorizing of masked cells
- Authors: Royden Wagner, Carlos Fernandez Lopez, Christoph Stiller
- Abstract summary: We introduce a novel self-supervision objective for the analysis of cells in biomedical microscopy images.
We propose training deep learning models to pseudo-colorize masked cells.
Our experiments reveal that approximating semantic segmentation by pseudo-colorization is beneficial for subsequent fine-tuning on cell detection.
- Score: 18.843372840624077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-supervised learning, which is strikingly referred to as the dark matter
of intelligence, is gaining more attention in biomedical applications of deep
learning. In this work, we introduce a novel self-supervision objective for the
analysis of cells in biomedical microscopy images. We propose training deep
learning models to pseudo-colorize masked cells. We use a physics-informed
pseudo-spectral colormap that is well suited for colorizing cell topology. Our
experiments reveal that approximating semantic segmentation by
pseudo-colorization is beneficial for subsequent fine-tuning on cell detection.
Inspired by the recent success of masked image modeling, we additionally mask
out cell parts and train to reconstruct these parts to further enrich the
learned representations. We compare our pre-training method with
self-supervised frameworks including contrastive learning (SimCLR), masked
autoencoders (MAEs), and edge-based self-supervision. We build upon our
previous work and train hybrid models for cell detection, which contain both
convolutional and vision transformer modules. Our pre-training method can
outperform SimCLR, MAE-like masked image modeling, and edge-based
self-supervision when pre-training on a diverse set of six fluorescence
microscopy datasets. Code is available at:
https://github.com/roydenwa/pseudo-colorize-masked-cells
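The abstract describes the pre-training objective only at a high level. Below is a minimal sketch of that idea, not the authors' implementation (which is available in the linked repository): random patches of a grayscale microscopy image are masked out, and a model is trained to predict a pseudo-colorized version of the full image, so it must both colorize and reconstruct the masked cell parts. Matplotlib's nipy_spectral colormap stands in for the paper's physics-informed pseudo-spectral colormap, TinyEncoderDecoder is a placeholder for the hybrid CNN-ViT model, and the patch size, masking ratio, and learning rate are illustrative assumptions.
```python
# Sketch of pseudo-colorizing masked cells as a pre-training objective.
# Assumptions (not from the paper): nipy_spectral as a stand-in colormap,
# a toy encoder-decoder instead of the hybrid CNN-ViT model, toy hyperparameters.
import torch
import torch.nn as nn
import matplotlib.pyplot as plt


def pseudo_colorize(gray: torch.Tensor) -> torch.Tensor:
    """Map grayscale images (B, 1, H, W) in [0, 1] to RGB pseudo-color targets."""
    colormap = plt.get_cmap("nipy_spectral")        # stand-in pseudo-spectral colormap
    rgba = colormap(gray.squeeze(1).cpu().numpy())  # (B, H, W, 4)
    rgb = torch.from_numpy(rgba[..., :3]).permute(0, 3, 1, 2).float()
    return rgb.to(gray.device)


def random_patch_mask(images: torch.Tensor, patch: int = 16, ratio: float = 0.5) -> torch.Tensor:
    """Zero out a random subset of non-overlapping patches (MAE-style masking)."""
    b, _, h, w = images.shape
    keep = (torch.rand(b, 1, h // patch, w // patch, device=images.device) > ratio).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * mask


class TinyEncoderDecoder(nn.Module):
    """Placeholder for the hybrid CNN-ViT detection backbone used in the paper."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # predict RGB pseudo-colors
        )

    def forward(self, x):
        return self.net(x)


model = TinyEncoderDecoder()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

gray_batch = torch.rand(4, 1, 64, 64)          # toy fluorescence microscopy crops
target = pseudo_colorize(gray_batch)           # pseudo-colorized full image as target
masked_input = random_patch_mask(gray_batch)   # mask out cell parts in the input

optimizer.zero_grad()
prediction = model(masked_input)
loss = loss_fn(prediction, target)             # colorize and reconstruct masked parts
loss.backward()
optimizer.step()
```
In the paper's setup, pre-training of this kind on six fluorescence microscopy datasets is followed by fine-tuning for cell detection; the sketch above covers only a single pre-training step on random data.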
Related papers
- IDCIA: Immunocytochemistry Dataset for Cellular Image Analysis [0.5057850174013127]
We present a new annotated microscopic cellular image dataset to improve the effectiveness of machine learning methods for cellular image analysis.
Our dataset includes microscopic images of cells, and for each image, the cell count and the location of individual cells.
arXiv Detail & Related papers (2024-11-13T19:33:08Z)
- Grad-CAMO: Learning Interpretable Single-Cell Morphological Profiles from 3D Cell Painting Images [0.0]
We introduce Grad-CAMO, a novel single-cell interpretability score for supervised feature extractors.
Grad-CAMO measures the proportion of a model's attention that is concentrated on the cell of interest versus the background.
arXiv Detail & Related papers (2024-03-26T11:48:37Z)
- Learning Nuclei Representations with Masked Image Modelling [0.41998444721319206]
Masked image modelling (MIM) is a powerful self-supervised representation learning paradigm.
We show the capacity of MIM to capture rich semantic representations of Haematoxylin & Eosin (H&E)-stained images at the nuclear level.
arXiv Detail & Related papers (2023-06-29T17:20:05Z)
- Not All Image Regions Matter: Masked Vector Quantization for Autoregressive Image Generation [78.13793505707952]
Existing autoregressive models follow the two-stage generation paradigm that first learns a codebook in the latent space for image reconstruction and then completes the image generation autoregressively based on the learned codebook.
We propose a novel two-stage framework, which consists of Masked Quantization VAE (MQ-VAE) and Stackformer, to relieve the model from modeling redundancy.
arXiv Detail & Related papers (2023-05-23T02:15:53Z)
- Improving Masked Autoencoders by Learning Where to Mask [65.89510231743692]
Masked image modeling is a promising self-supervised learning method for visual data.
We present AutoMAE, a framework that uses Gumbel-Softmax to interlink an adversarially-trained mask generator and a mask-guided image modeling process.
In our experiments, AutoMAE is shown to provide effective pretraining models on standard self-supervised benchmarks and downstream tasks.
arXiv Detail & Related papers (2023-03-12T05:28:55Z)
- Unsupervised Deep Digital Staining For Microscopic Cell Images Via Knowledge Distillation [46.006296303296544]
It is difficult to obtain large-scale stained/unstained cell image pairs in practice.
We propose a novel unsupervised deep learning framework for the digital staining of cell images.
We show that the proposed unsupervised deep staining method can generate stained images with more accurate positions and shapes of the cell targets.
arXiv Detail & Related papers (2023-03-03T16:26:38Z)
- Seamless Iterative Semi-Supervised Correction of Imperfect Labels in Microscopy Images [57.42492501915773]
In-vitro tests are an alternative to animal testing for the toxicity of medical devices.
Human fatigue contributes to labeling errors, making the use of deep learning appealing.
We propose Seamless Iterative Semi-Supervised correction of Imperfect labels (SISSI)
Our method successfully provides an adaptive early learning correction technique for object detection.
arXiv Detail & Related papers (2022-08-05T18:52:20Z)
- CellCentroidFormer: Combining Self-attention and Convolution for Cell Detection [4.555723508665994]
We propose a novel hybrid CNN-ViT model for cell detection in microscopy images.
Our centroid-based cell detection method represents cells as ellipses and is end-to-end trainable.
arXiv Detail & Related papers (2022-06-01T09:04:39Z)
- Adversarial Masking for Self-Supervised Learning [81.25999058340997]
ADIOS, a masked image modeling (MIM) framework for self-supervised learning, is proposed.
It simultaneously learns a masking function and an image encoder using an adversarial objective.
It consistently improves on state-of-the-art self-supervised learning (SSL) methods on a variety of tasks and datasets.
arXiv Detail & Related papers (2022-01-31T10:23:23Z)
- Comparisons among different stochastic selection of activation layers for convolutional neural networks for healthcare [77.99636165307996]
We classify biomedical images using ensembles of neural networks.
We select our activations among the following ones: ReLU, leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped ReLU, Swish, Mish, Mexican Linear Unit, Parametric Deformable Linear Unit, Soft Root Sign.
arXiv Detail & Related papers (2020-11-24T01:53:39Z)