Semi-Supervised Segmentation of Mitochondria from Electron Microscopy
Images Using Spatial Continuity
- URL: http://arxiv.org/abs/2206.02392v1
- Date: Mon, 6 Jun 2022 06:52:19 GMT
- Title: Semi-Supervised Segmentation of Mitochondria from Electron Microscopy
Images Using Spatial Continuity
- Authors: Yunpeng Xiao, Youpeng Zhao and Ge Yang
- Abstract summary: We propose a semi-supervised deep learning model that segments mitochondria by leveraging the spatial continuity of their structural, morphological, and contextual information.
Our model achieves performance similar to that of state-of-the-art fully supervised models but requires only 20% of their annotated training data.
- Score: 3.631638087834872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Morphology of mitochondria plays critical roles in mediating their
physiological functions. Accurate segmentation of mitochondria from 3D electron
microscopy (EM) images is essential to quantitative characterization of their
morphology at the nanometer scale. Fully supervised deep learning models
developed for this task achieve excellent performance but require substantial
amounts of annotated data for training. However, manual annotation of EM images
is laborious and time-consuming because of their large volumes, limited
contrast, and low signal-to-noise ratios (SNRs). To overcome this challenge, we
propose a semi-supervised deep learning model that segments mitochondria by
leveraging the spatial continuity of their structural, morphological, and
contextual information in both labeled and unlabeled images. We use random
piecewise affine transformation to synthesize comprehensive and realistic
mitochondrial morphology for augmentation of training data. Experiments on the
EPFL dataset show that our model achieves performance similar to that of
state-of-the-art fully supervised models but requires only ~20% of their
annotated training data. Our semi-supervised model is versatile and can also
accurately segment other spatially continuous structures from EM images. Data
and code of this study are openly accessible at
https://github.com/cbmi-group/MPP.
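As a rough illustration of the random piecewise affine augmentation mentioned in the abstract, the sketch below deforms an EM image and its segmentation mask with the same randomly jittered control-point grid using scikit-image. This is a minimal sketch, not the authors' implementation (their code is in the linked repository); the function name random_piecewise_affine and all parameter values are illustrative assumptions.

    # Minimal sketch: random piecewise affine augmentation of an EM image and its mask.
    # Illustrative only; see https://github.com/cbmi-group/MPP for the authors' code.
    import numpy as np
    from skimage.transform import PiecewiseAffineTransform, warp

    def random_piecewise_affine(image, mask, grid=(6, 6), max_shift=8.0, rng=None):
        """Warp image and mask with the same random piecewise affine transform."""
        rng = np.random.default_rng(rng)
        rows, cols = image.shape[:2]

        # Regular grid of control points spanning the image.
        src_rows, src_cols = np.meshgrid(
            np.linspace(0, rows - 1, grid[0]),
            np.linspace(0, cols - 1, grid[1]),
            indexing="ij",
        )
        src = np.stack([src_cols.ravel(), src_rows.ravel()], axis=1)

        # Randomly jitter the control points to deform local morphology
        # while keeping the overall structure spatially continuous.
        dst = src + rng.uniform(-max_shift, max_shift, size=src.shape)

        tform = PiecewiseAffineTransform()
        tform.estimate(src, dst)

        warped_image = warp(image, tform, order=1, preserve_range=True)
        warped_mask = warp(mask, tform, order=0, preserve_range=True)  # nearest neighbor keeps labels
        return warped_image.astype(image.dtype), warped_mask.astype(mask.dtype)

Warping the mask with nearest-neighbor interpolation (order=0) keeps the annotation labels aligned with the deformed mitochondrial morphology, so the synthesized pair can be used directly as additional training data.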
Related papers
- ShapeMamba-EM: Fine-Tuning Foundation Model with Local Shape Descriptors and Mamba Blocks for 3D EM Image Segmentation [49.42525661521625]
This paper presents ShapeMamba-EM, a specialized fine-tuning method for 3D EM segmentation.
It is tested over a wide range of EM images, covering five segmentation tasks and 10 datasets.
arXiv Detail & Related papers (2024-08-26T08:59:22Z) - Retinal OCT Synthesis with Denoising Diffusion Probabilistic Models for
Layer Segmentation [2.4113205575263708]
We propose an image synthesis method that utilizes denoising diffusion probabilistic models (DDPMs) to automatically generate retinal optical coherence tomography (OCT) images.
We observe a consistent improvement in layer segmentation accuracy, which is validated using various neural networks.
These findings demonstrate the promising potential of DDPMs in reducing the need for manual annotations of retinal OCT images.
arXiv Detail & Related papers (2023-11-09T16:09:24Z) - Learning Multiscale Consistency for Self-supervised Electron Microscopy
Instance Segmentation [48.267001230607306]
We propose a pretraining framework that enhances multiscale consistency in EM volumes.
Our approach leverages a Siamese network architecture, integrating strong and weak data augmentations.
It effectively captures voxel and feature consistency, showing promise for learning transferable representations for EM analysis.
arXiv Detail & Related papers (2023-08-19T05:49:13Z) - AnyStar: Domain randomized universal star-convex 3D instance
segmentation [8.670653580154895]
We present AnyStar, a domain-randomized generative model that simulates synthetic training data of blob-like objects with randomized appearance and orientation.
As a result, networks using our generative model do not require annotated images from unseen datasets.
arXiv Detail & Related papers (2023-07-13T20:01:26Z) - Optimizations of Autoencoders for Analysis and Classification of
Microscopic In Situ Hybridization Images [68.8204255655161]
We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze requires an unsupervised learning model, for which we employ deep autoencoders, a type of artificial neural network.
arXiv Detail & Related papers (2023-04-19T13:45:28Z) - 3D Mitochondria Instance Segmentation with Spatio-Temporal Transformers [101.44668514239959]
We propose a hybrid encoder-decoder framework that efficiently computes spatial and temporal attentions in parallel.
We also introduce a semantic foreground-background adversarial loss during training that aids in delineating regions of mitochondria instances from the background clutter.
arXiv Detail & Related papers (2023-03-21T17:58:49Z) - 3D fluorescence microscopy data synthesis for segmentation and
benchmarking [0.9922927990501083]
Conditional generative adversarial networks can be utilized to generate realistic image data for 3D fluorescence microscopy.
An additional positional conditioning of the cellular structures enables the reconstruction of position-dependent intensity characteristics.
A patch-wise working principle and a subsequent full-size reassembly strategy are used to generate image data of arbitrary size and for different organisms.
arXiv Detail & Related papers (2021-07-21T16:08:56Z) - Enforcing Morphological Information in Fully Convolutional Networks to
Improve Cell Instance Segmentation in Fluorescence Microscopy Images [1.408123603417833]
We propose a novel cell instance segmentation approach based on the well-known U-Net architecture.
To enforce the learning of morphological information per pixel, a deep distance transformer (DDT) acts as a backbone model.
The obtained results suggest a performance boost over traditional U-Net architectures.
arXiv Detail & Related papers (2021-06-10T15:54:38Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in
Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel machine learning architecture, which allows us to infuse a deep neural network with human-powered abstraction at the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z) - Deep Low-Shot Learning for Biological Image Classification and
Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
However, labeling training data with precise stages is very time-consuming, even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z)