EAP4EMSIG -- Experiment Automation Pipeline for Event-Driven Microscopy to Smart Microfluidic Single-Cells Analysis
- URL: http://arxiv.org/abs/2411.05030v1
- Date: Wed, 06 Nov 2024 09:37:31 GMT
- Title: EAP4EMSIG -- Experiment Automation Pipeline for Event-Driven Microscopy to Smart Microfluidic Single-Cells Analysis
- Authors: Nils Friederich, Angelo Jovin Yamachui Sitcheu, Annika Nassal, Matthias Pesch, Erenus Yildiz, Maximilian Beichter, Lukas Scholtes, Bahar Akbaba, Thomas Lautenschlager, Oliver Neumann, Dietrich Kohlheyer, Hanno Scharr, Johannes Seiffarth, Katharina Nöh, Ralf Mikut
- Abstract summary: We introduce the Experiment Automation Pipeline for Event-Driven Microscopy to Smart Microfluidic Single-Cells Analysis (EAP4EMSIG).
In particular, we present initial zero-shot results from the real-time segmentation module of our approach.
Our findings indicate that among four State-Of-The-Art (SOTA) segmentation methods evaluated, Omnipose delivers the highest Panoptic Quality (PQ) score of 0.9336, while Contour Proposal Network (CPN) achieves the fastest inference time of 185 ms.
- Score: 1.8258105145031496
- License:
- Abstract: Microfluidic Live-Cell Imaging (MLCI) generates high-quality data that allows biotechnologists to study cellular growth dynamics in detail. However, obtaining these continuous data over extended periods is challenging, particularly in achieving accurate and consistent real-time event classification at the intersection of imaging and stochastic biology. To address this issue, we introduce the Experiment Automation Pipeline for Event-Driven Microscopy to Smart Microfluidic Single-Cells Analysis (EAP4EMSIG). In particular, we present initial zero-shot results from the real-time segmentation module of our approach. Our findings indicate that among four State-Of-The-Art (SOTA) segmentation methods evaluated, Omnipose delivers the highest Panoptic Quality (PQ) score of 0.9336, while Contour Proposal Network (CPN) achieves the fastest inference time of 185 ms with the second-highest PQ score of 0.8575. Furthermore, we observed that the vision foundation model Segment Anything is unsuitable for this particular use case.
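For reference, the Panoptic Quality (PQ) metric used to rank the segmentation methods sums the IoU of matched ground-truth/prediction instance pairs and divides by |TP| + 0.5|FP| + 0.5|FN|. The sketch below is not the authors' code; it is a minimal NumPy illustration, assuming integer-labelled instance masks (0 = background) and the standard matching threshold of IoU > 0.5.

```python
# Minimal sketch (not the authors' implementation): Panoptic Quality (PQ) for
# instance segmentation masks, where 0 is background and each positive integer
# labels one cell instance.
import numpy as np

def panoptic_quality(gt: np.ndarray, pred: np.ndarray, iou_thr: float = 0.5) -> float:
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]

    matched_pred = set()
    tp_iou_sum, tp = 0.0, 0
    for g in gt_ids:
        g_mask = gt == g
        best_iou, best_p = 0.0, None
        for p in pred_ids:
            if p in matched_pred:
                continue
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            if inter == 0:
                continue
            iou = inter / np.logical_or(g_mask, p_mask).sum()
            if iou > best_iou:
                best_iou, best_p = iou, p
        if best_iou > iou_thr:          # a valid match requires IoU > 0.5
            matched_pred.add(best_p)
            tp_iou_sum += best_iou
            tp += 1

    fp = len(pred_ids) - tp             # unmatched predicted instances
    fn = len(gt_ids) - tp               # unmatched ground-truth instances
    denom = tp + 0.5 * fp + 0.5 * fn
    return tp_iou_sum / denom if denom > 0 else 0.0
```

With this definition, a perfect one-to-one match of all cells yields PQ = 1.0, and both over- and under-segmentation lower the score, which is why PQ is a stricter summary than a plain Dice or F1 score for instance segmentation.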
Related papers
- SMILE-UHURA Challenge -- Small Vessel Segmentation at Mesoscopic Scale from Ultra-High Resolution 7T Magnetic Resonance Angiograms [60.35639972035727]
The lack of publicly available annotated datasets has impeded the development of robust, machine learning-driven segmentation algorithms.
The SMILE-UHURA challenge addresses the gap in publicly available annotated datasets by providing an annotated dataset of Time-of-Flight angiography acquired with 7T MRI.
Dice scores reached up to 0.838 $\pm$ 0.066 and 0.716 $\pm$ 0.125 on the respective datasets, with an average performance of up to 0.804 $\pm$ 0.15.
arXiv Detail & Related papers (2024-11-14T17:06:00Z)
- Tracking one-in-a-million: Large-scale benchmark for microbial single-cell tracking with experiment-aware robustness metrics [0.0]
We present the largest publicly available and annotated dataset for microbial live-cell imaging (MLCI).
This dataset contains more than 1.4 million cell instances, 29k cell tracks, and 14k cell divisions.
Our new benchmark quantifies the influence of experiment parameters on tracking quality and enables the development of new data-driven methods.
arXiv Detail & Related papers (2024-11-01T13:03:51Z)
- On the effectiveness of smartphone IMU sensors and Deep Learning in the detection of cardiorespiratory conditions [0.21987601456703473]
This research introduces an innovative method for the early screening of cardiorespiratory diseases based on an acquisition protocol.
We collected, in a clinical setting, a dataset featuring recordings of breathing kinematics obtained by accelerometer and gyroscope readings from five distinct body regions.
We propose an end-to-end deep learning pipeline for early cardiorespiratory disease screening, incorporating a preprocessing step segmenting the data into individual breathing cycles.
arXiv Detail & Related papers (2024-08-27T18:29:47Z)
- Multi-stream Cell Segmentation with Low-level Cues for Multi-modality Images [66.79688768141814]
We develop an automatic cell classification pipeline to label microscopy images.
We then train a classification model based on the category labels.
We deploy two types of segmentation models to segment cells with roundish and irregular shapes.
arXiv Detail & Related papers (2023-10-22T08:11:08Z)
- Topologically Regularized Multiple Instance Learning to Harness Data Scarcity [15.06687736543614]
Multiple Instance Learning (MIL) models have emerged as a powerful tool to classify patients' microscopy samples.
We introduce a topological regularization term to MIL to mitigate the challenge of data scarcity.
We show an average enhancement of 2.8% for MIL benchmarks, 15.3% for synthetic MIL datasets, and 5.5% for real-world biomedical datasets over the current state-of-the-art.
arXiv Detail & Related papers (2023-07-26T08:14:18Z)
- Advanced Multi-Microscopic Views Cell Semi-supervised Segmentation [0.0]
Deep learning (DL) shows powerful potential in cell segmentation tasks, but suffers from poor generalization.
In this paper, we introduce a novel semi-supervised cell segmentation method called Multi-Microscopic-view Cell semi-supervised Segmentation (MMCS).
MMCS can effectively train cell segmentation models using fewer labeled multi-posture cell images acquired with different microscopes.
It achieves an F1-score of 0.8239 and the running time for all cases is within the time tolerance.
arXiv Detail & Related papers (2023-03-21T08:08:13Z)
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information, to the extent that it can achieve the same performance with as little as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
- End-to-end LSTM based estimation of volcano event epicenter localization [55.60116686945561]
An end-to-end LSTM-based scheme is proposed to address the problem of volcano event localization.
LSTM was chosen due to its capability to capture the dynamics of time-varying signals.
Results show that the LSTM-based architecture achieved a success rate, i.e., an error smaller than 1.0 km, of 48.5%.
arXiv Detail & Related papers (2021-10-27T17:11:33Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)