3D fluorescence microscopy data synthesis for segmentation and benchmarking
- URL: http://arxiv.org/abs/2107.10180v1
- Date: Wed, 21 Jul 2021 16:08:56 GMT
- Title: 3D fluorescence microscopy data synthesis for segmentation and benchmarking
- Authors: Dennis Eschweiler, Malte Rethwisch, Mareike Jarchow, Simon Koppers, Johannes Stegmaier
- Abstract summary: Conditional generative adversarial networks can be utilized to generate realistic image data for 3D fluorescence microscopy.
An additional positional conditioning of the cellular structures enables the reconstruction of position-dependent intensity characteristics.
A patch-wise working principle and a subsequent full-size reassembly strategy are used to generate image data of arbitrary size and for different organisms.
- Score: 0.9922927990501083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated image processing approaches are indispensable for many biomedical
experiments and help to cope with the increasing amount of microscopy image
data in a fast and reproducible way. State-of-the-art deep learning-based
approaches in particular most often require large amounts of annotated training
data to produce accurate and generalizable outputs, but their development is
frequently hampered by the general lack of such annotated data sets. In this work, we
propose how conditional generative adversarial networks can be utilized to
generate realistic image data for 3D fluorescence microscopy from annotation
masks of 3D cellular structures. In combination with mask simulation
approaches, we demonstrate the generation of fully-annotated 3D microscopy data
sets that we make publicly available for training or benchmarking. An
additional positional conditioning of the cellular structures enables the
reconstruction of position-dependent intensity characteristics and allows
image data of different quality levels to be generated. A patch-wise working
principle and a subsequent full-size reassembly strategy are used to generate
image data of arbitrary size and for different organisms. We present this as a
proof-of-concept for the automated generation of fully-annotated training data
sets that requires only a minimum of manual interaction, alleviating the need
for manual annotation.
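The patch-wise generation with full-size reassembly described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact scheme: the Gaussian border weighting, the patch stride, and the `generate_patch` stand-in for the trained conditional GAN are all assumptions made for the example.

```python
import numpy as np

def gaussian_weight(patch_shape, sigma=0.5):
    """Separable Gaussian window (assumes 3D patches) that down-weights
    patch borders so overlapping patches blend smoothly on reassembly."""
    axes = [np.exp(-np.linspace(-1, 1, n) ** 2 / (2 * sigma ** 2))
            for n in patch_shape]
    return (axes[0][:, None, None]
            * axes[1][None, :, None]
            * axes[2][None, None, :]).astype(np.float32)

def reassemble(patches, positions, full_shape, patch_shape):
    """Blend individually generated, overlapping patches into one
    full-size volume via weighted averaging."""
    out = np.zeros(full_shape, dtype=np.float32)
    norm = np.zeros(full_shape, dtype=np.float32)
    w = gaussian_weight(patch_shape)
    for patch, (z, y, x) in zip(patches, positions):
        dz, dy, dx = patch_shape
        out[z:z + dz, y:y + dy, x:x + dx] += patch * w
        norm[z:z + dz, y:y + dy, x:x + dx] += w
    # Normalize by the accumulated weights; epsilon guards empty voxels.
    return out / np.maximum(norm, 1e-8)
```

In practice, each patch would be produced by the trained generator from the corresponding crop of the annotation mask (plus its positional conditioning), and overlapping strides hide seams between neighboring patches.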
Related papers
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z) - Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z) - Microscopy Image Segmentation via Point and Shape Regularized Data Synthesis [9.47802391546853]
We develop a unified pipeline for microscopy image segmentation using synthetically generated training data.
Our framework achieves comparable results to models trained on authentic microscopy images with dense labels.
arXiv Detail & Related papers (2023-08-18T22:00:53Z) - Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z) - Optimizations of Autoencoders for Analysis and Classification of Microscopic In Situ Hybridization Images [68.8204255655161]
We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze requires an unsupervised learning model, for which we employ a type of artificial neural network: deep learning autoencoders.
arXiv Detail & Related papers (2023-04-19T13:45:28Z) - Leveraging generative adversarial networks to create realistic scanning transmission electron microscopy images [2.5954872177280346]
Machine learning could revolutionize materials research through autonomous data collection and processing.
We employ a cycle generative adversarial network (CycleGAN) with a reciprocal space discriminator to augment simulated data with realistic spatial frequency information.
We showcase our approach by training a fully convolutional network (FCN) to identify single atom defects in a 4.5 million atom data set.
arXiv Detail & Related papers (2023-01-18T19:19:27Z) - Semi-Supervised Segmentation of Mitochondria from Electron Microscopy Images Using Spatial Continuity [3.631638087834872]
We propose a semi-supervised deep learning model that segments mitochondria by leveraging the spatial continuity of their structural, morphological, and contextual information.
Our model achieves performance similar to that of state-of-the-art fully supervised models but requires only 20% of their annotated training data.
arXiv Detail & Related papers (2022-06-06T06:52:19Z) - Multimodal Masked Autoencoders Learn Transferable Representations [127.35955819874063]
We propose a simple and scalable network architecture, the Multimodal Masked Autoencoder (M3AE).
M3AE learns a unified encoder for both vision and language data via masked token prediction.
We provide an empirical study of M3AE trained on a large-scale image-text dataset, and find that M3AE is able to learn generalizable representations that transfer well to downstream tasks.
arXiv Detail & Related papers (2022-05-27T19:09:42Z) - Generation of microbial colonies dataset with deep learning style transfer [0.0]
We introduce a strategy to generate a synthetic dataset of microbiological images of Petri dishes that can be used to train deep learning models.
We show that the method is able to synthesize a dataset of realistic looking images that can be used to train a neural network model capable of localising, segmenting, and classifying five different microbial species.
arXiv Detail & Related papers (2021-11-06T03:11:01Z) - Super-resolution of multiphase materials by combining complementary 2D and 3D image data using generative adversarial networks [0.0]
We present a method for combining information from pairs of distinct but complementary imaging techniques.
Specifically, we use deep convolutional generative adversarial networks to implement super-resolution, style transfer and dimensionality expansion.
Having confidence in the accuracy of our method, we then demonstrate its power by applying to a real data pair from a lithium ion battery electrode.
arXiv Detail & Related papers (2021-10-21T17:07:57Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.