3D fluorescence microscopy data synthesis for segmentation and
benchmarking
- URL: http://arxiv.org/abs/2107.10180v1
- Date: Wed, 21 Jul 2021 16:08:56 GMT
- Title: 3D fluorescence microscopy data synthesis for segmentation and
benchmarking
- Authors: Dennis Eschweiler, Malte Rethwisch, Mareike Jarchow, Simon Koppers,
Johannes Stegmaier
- Abstract summary: Conditional generative adversarial networks can be utilized to generate realistic image data for 3D fluorescence microscopy.
An additional positional conditioning of the cellular structures enables the reconstruction of position-dependent intensity characteristics.
A patch-wise working principle and a subsequent full-size reassembly strategy are used to generate image data of arbitrary size for different organisms.
- Score: 0.9922927990501083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated image processing approaches are indispensable for many biomedical
experiments and help to cope with the increasing amount of microscopy image
data in a fast and reproducible way. State-of-the-art deep learning-based
approaches in particular typically require large amounts of annotated training
data to produce accurate and generalizable outputs, but they are often hampered
by the general scarcity of such annotated data sets. In this work, we
demonstrate how conditional generative adversarial networks can be utilized to
generate realistic image data for 3D fluorescence microscopy from annotation
masks of 3D cellular structures. In combination with mask simulation
approaches, we demonstrate the generation of fully-annotated 3D microscopy data
sets that we make publicly available for training or benchmarking. An
additional positional conditioning of the cellular structures enables the
reconstruction of position-dependent intensity characteristics and makes it
possible to generate image data of different quality levels. A patch-wise
working principle and a subsequent full-size reassembly strategy are used to
generate image data
of arbitrary size and different organisms. We present this as a
proof-of-concept for the automated generation of fully-annotated training data
sets requiring only a minimum of manual interaction to alleviate the need for
manual annotations.
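The patch-wise working principle with full-size reassembly described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `fake_generator` is a hypothetical stand-in for the trained conditional GAN, and the Hann-window blending of overlapping patches is one common way to realize a seam-free reassembly strategy.

```python
import numpy as np

def fake_generator(mask_patch):
    # Hypothetical placeholder for the trained conditional GAN generator:
    # here it simply maps mask labels to intensities.
    return mask_patch.astype(np.float32)

def reassemble(mask, patch=32, stride=16):
    """Synthesize a full-size volume by generating overlapping patches
    and blending them back together with a smooth weighting window."""
    out = np.zeros(mask.shape, dtype=np.float32)
    weight = np.zeros(mask.shape, dtype=np.float32)
    # Hann window (plus a small offset so borders keep nonzero weight)
    # down-weights patch edges, suppressing seam artifacts in the blend.
    w1d = np.hanning(patch) + 0.01
    win = w1d[:, None, None] * w1d[None, :, None] * w1d[None, None, :]
    zs, ys, xs = mask.shape
    for z in range(0, zs - patch + 1, stride):
        for y in range(0, ys - patch + 1, stride):
            for x in range(0, xs - patch + 1, stride):
                sl = np.s_[z:z + patch, y:y + patch, x:x + patch]
                out[sl] += fake_generator(mask[sl]) * win
                weight[sl] += win
    return out / np.maximum(weight, 1e-8)

# Toy annotation mask with three labels; volume size is arbitrary as long
# as the patch grid covers it.
mask = np.random.randint(0, 3, size=(64, 64, 64))
vol = reassemble(mask)
print(vol.shape)  # (64, 64, 64)
```

Because the generator is applied only to fixed-size sub-volumes, memory usage stays constant regardless of the final volume size, which is what makes image data of arbitrary size feasible.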
Related papers
- Learning General-Purpose Biomedical Volume Representations using Randomized Synthesis [9.355513913682794]
Current biomedical foundation models struggle to generalize as public 3D datasets are small.
We propose a data engine that synthesizes highly variable training samples that enable generalization to new biomedical contexts.
To then train a single 3D network for any voxel-level task, we develop a contrastive learning method that pretrains the network to be stable against nuisance imaging variation simulated by the data engine.
arXiv Detail & Related papers (2024-11-04T18:40:46Z)
- Large Spatial Model: End-to-end Unposed Images to Semantic 3D [79.94479633598102]
Large Spatial Model (LSM) processes unposed RGB images directly into semantic radiance fields.
LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward operation.
It can generate versatile label maps by interacting with language at novel viewpoints.
arXiv Detail & Related papers (2024-10-24T17:54:42Z)
- DiffKillR: Killing and Recreating Diffeomorphisms for Cell Annotation in Dense Microscopy Images [105.46086313858062]
We introduce DiffKillR, a novel framework that reframes cell annotation as the combination of archetype matching and image registration tasks.
We will discuss the theoretical properties of DiffKillR and validate it on three microscopy tasks, demonstrating its advantages over existing supervised, semi-supervised, and unsupervised methods.
arXiv Detail & Related papers (2024-10-04T00:38:29Z) - Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z) - Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z) - Microscopy Image Segmentation via Point and Shape Regularized Data
Synthesis [9.47802391546853]
We develop a unified pipeline for microscopy image segmentation using synthetically generated training data.
Our framework achieves comparable results to models trained on authentic microscopy images with dense labels.
arXiv Detail & Related papers (2023-08-18T22:00:53Z) - Optimizations of Autoencoders for Analysis and Classification of
Microscopic In Situ Hybridization Images [68.8204255655161]
We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze requires an unsupervised learning model, for which we employ a type of artificial neural network: deep learning autoencoders.
arXiv Detail & Related papers (2023-04-19T13:45:28Z) - Leveraging generative adversarial networks to create realistic scanning
transmission electron microscopy images [2.5954872177280346]
Machine learning could revolutionize materials research through autonomous data collection and processing.
We employ a cycle generative adversarial network (CycleGAN) with a reciprocal space discriminator to augment simulated data with realistic spatial frequency information.
We showcase our approach by training a fully convolutional network (FCN) to identify single atom defects in a 4.5 million atom data set.
arXiv Detail & Related papers (2023-01-18T19:19:27Z) - Semi-Supervised Segmentation of Mitochondria from Electron Microscopy
Images Using Spatial Continuity [3.631638087834872]
We propose a semi-supervised deep learning model that segments mitochondria by leveraging the spatial continuity of their structural, morphological, and contextual information.
Our model achieves performance similar to that of state-of-the-art fully supervised models but requires only 20% of their annotated training data.
arXiv Detail & Related papers (2022-06-06T06:52:19Z) - Super-resolution of multiphase materials by combining complementary 2D
and 3D image data using generative adversarial networks [0.0]
We present a method for combining information from pairs of distinct but complementary imaging techniques.
Specifically, we use deep convolutional generative adversarial networks to implement super-resolution, style transfer and dimensionality expansion.
Having confidence in the accuracy of our method, we then demonstrate its power by applying it to a real data pair from a lithium ion battery electrode.
arXiv Detail & Related papers (2021-10-21T17:07:57Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in
Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel machine learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.