Analysis of the performance of U-Net neural networks for the
segmentation of living cells
- URL: http://arxiv.org/abs/2210.01538v1
- Date: Tue, 4 Oct 2022 11:48:59 GMT
- Title: Analysis of the performance of U-Net neural networks for the
segmentation of living cells
- Authors: André O. Françani
- Abstract summary: This work studies the performance of deep learning for segmenting microscopy images.
Deep learning techniques, mainly convolutional neural networks, have been applied to cell segmentation problems.
Quasi-real-time image analysis was enabled: 6.20 GB of data was processed in 4 minutes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The automated analysis of microscopy images is a challenge in the context of
single-cell tracking and quantification. This work studies the performance of
deep learning for segmenting microscopy images and improves the previously
available pipeline for tracking single cells.
Deep learning techniques, mainly convolutional neural networks, have been
applied to cell segmentation problems and have shown high accuracy and fast
performance. For image segmentation, a hyperparameter analysis was carried out
to implement a convolutional neural network with the U-Net architecture.
Furthermore, different models were built to optimize the network size and the
number of learnable parameters. The trained network
is then used in the pipeline that localizes the traps in a microfluidic device,
performs the image segmentation on trap images, and evaluates the fluorescence
intensity and the area of single cells over time. The cells are tracked during
an experiment by image processing algorithms such as centroid estimation and
watershed. Finally, with all the improvements to the segmentation network and
the pipeline, quasi-real-time image analysis was enabled: 6.20 GB of data was
processed in 4 minutes.
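The centroid-based tracking step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the greedy nearest-centroid matching, and the distance threshold are assumptions, and the watershed step is taken as having already produced the labeled cell masks.

```python
import numpy as np

def centroids(labels: np.ndarray) -> dict:
    """Estimate the centroid (row, col) of every labeled cell in a mask."""
    out = {}
    for lab in np.unique(labels):
        if lab == 0:  # 0 is background
            continue
        rows, cols = np.nonzero(labels == lab)
        out[int(lab)] = np.array([rows.mean(), cols.mean()])
    return out

def match_frames(prev: dict, curr: dict, max_dist: float = 10.0) -> dict:
    """Greedy nearest-centroid matching between two consecutive frames.

    Returns a mapping {previous cell id -> current cell id}; cells whose
    nearest candidate is farther than `max_dist` pixels stay unmatched.
    """
    matches, taken = {}, set()
    for p_id, p_c in prev.items():
        best, best_d = None, max_dist
        for c_id, c_c in curr.items():
            if c_id in taken:
                continue
            d = float(np.linalg.norm(p_c - c_c))
            if d < best_d:
                best, best_d = c_id, d
        if best is not None:
            matches[p_id] = best
            taken.add(best)
    return matches

# Two tiny synthetic frames: cell 1 moves down by one pixel, cell 2 is static.
frame1 = np.zeros((6, 6), dtype=int)
frame1[0:2, 0:2] = 1
frame1[4:6, 4:6] = 2
frame2 = np.zeros((6, 6), dtype=int)
frame2[1:3, 0:2] = 5  # same cell, new watershed label
frame2[4:6, 4:6] = 7
links = match_frames(centroids(frame1), centroids(frame2))
```

With masks labeled per frame, the resulting `links` chain cell identities over time, which is the basis for evaluating fluorescence intensity and area per cell across an experiment.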
Related papers
- Image segmentation with traveling waves in an exactly solvable recurrent neural network [71.74150501418039]
We show that a recurrent neural network can effectively divide an image into groups according to a scene's structural characteristics.
We present a precise description of the mechanism underlying object segmentation in this network.
We then demonstrate a simple algorithm for object segmentation that generalizes across inputs ranging from simple geometric objects in grayscale images to natural images.
arXiv Detail & Related papers (2023-11-28T16:46:44Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- Optimizing Neural Network Scale for ECG Classification [1.8953148404648703]
We study scaling convolutional neural networks (CNNs) specifically targeting Residual neural networks (ResNet) for analyzing electrocardiograms (ECGs)
We explored and demonstrated an efficient approach to scale ResNet by examining the effects of crucial parameters, including layer depth, the number of channels, and the convolution kernel size.
Our findings provide insight into obtaining more efficient and accurate models with fewer computing resources or less time.
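The effect of the scaling knobs this summary names (layer depth, channel count, kernel size) on model size can be illustrated with a rough parameter-count sketch. This is an assumption-laden back-of-the-envelope calculation, not the paper's actual model: it counts only basic two-convolution residual blocks and ignores the stem, downsampling, and normalization layers.

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Learnable parameters of a 2-D convolution: k*k weights per
    input/output channel pair, plus one bias per output channel."""
    return c_in * c_out * k * k + c_out

def residual_block_params(channels: int, k: int) -> int:
    """Two equal-width convolutions, as in a basic ResNet block."""
    return 2 * conv_params(channels, channels, k)

def scaled_resnet_params(depth: int, channels: int, k: int) -> int:
    """Total parameters of `depth` stacked basic blocks at fixed width."""
    return depth * residual_block_params(channels, k)
```

Under these assumptions, doubling depth doubles the parameter count, while doubling the channel width nearly quadruples it, which is the kind of trade-off such a scaling study has to weigh against accuracy.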
arXiv Detail & Related papers (2023-08-24T01:26:31Z)
- Training a spiking neural network on an event-based label-free flow cytometry dataset [0.7742297876120561]
In this work, we combine an event-based camera with a free-space optical setup to obtain spikes for each particle passing in a microfluidic channel.
A spiking neural network is trained on the collected dataset, resulting in 97.7% mean training accuracy and 93.5% mean testing accuracy.
arXiv Detail & Related papers (2023-03-19T11:32:57Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- EfficientCellSeg: Efficient Volumetric Cell Segmentation Using Context Aware Pseudocoloring [4.555723508665994]
We introduce a small convolutional neural network (CNN) for volumetric cell segmentation.
Our model is efficient and has an asymmetric encoder-decoder structure with very few parameters in the decoder.
Our method achieves top-ranking results, while our CNN model has an up to 25x lower number of parameters than other top-ranking methods.
arXiv Detail & Related papers (2022-04-06T18:02:15Z)
- The Preliminary Results on Analysis of TAIGA-IACT Images Using Convolutional Neural Networks [68.8204255655161]
The aim of the work is to study the possibility of applying machine learning to solve the tasks set for TAIGA-IACT.
The method of Convolutional Neural Networks (CNN) was applied to process and analyze Monte-Carlo events simulated with CORSIKA.
arXiv Detail & Related papers (2021-12-19T15:17:20Z)
- CellTrack R-CNN: A Novel End-To-End Deep Neural Network for Cell Segmentation and Tracking in Microscopy Images [21.747994390120105]
We propose a novel approach to combine cell segmentation and cell tracking into a unified end-to-end deep learning based framework.
Our method outperforms state-of-the-art algorithms in terms of both cell segmentation and cell tracking accuracies.
arXiv Detail & Related papers (2021-02-20T15:55:40Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- The efficiency of deep learning algorithms for detecting anatomical reference points on radiological images of the head profile [55.41644538483948]
A U-Net neural network allows performing the detection of anatomical reference points more accurately than a fully convolutional neural network.
The results of the detection of anatomical reference points by the U-Net neural network are closer to the average results of the detection of reference points by a group of orthodontists.
arXiv Detail & Related papers (2020-05-25T13:51:03Z)
- Cell Segmentation and Tracking using CNN-Based Distance Predictions and a Graph-Based Matching Strategy [0.20999222360659608]
We present a method for the segmentation of touching cells in microscopy images.
By using a novel representation of cell borders, inspired by distance maps, our method can utilize not only touching cells but also close cells in the training process.
This representation is notably robust to annotation errors and shows promising results for segmenting microscopy images containing cell types that are underrepresented in or absent from the training data.
arXiv Detail & Related papers (2020-04-03T11:55:28Z)
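A border-aware distance representation along the lines this summary describes can be sketched as follows. This is a minimal pure-NumPy illustration, not the paper's exact training target: each foreground pixel gets its distance to the nearest pixel outside its own cell, so the boundary between two touching cells shows up as a low-valued ridge. The image edge is treated as a border for simplicity, and the quadratic-time relaxation is for clarity only.

```python
import numpy as np

def border_distance_map(labels: np.ndarray) -> np.ndarray:
    """Distance of each foreground pixel to the nearest pixel that is
    background, belongs to a different (touching) cell, or lies off-image."""
    h, w = labels.shape
    INF = h + w  # upper bound on any 4-connected distance in the image
    dist = np.where(labels > 0, INF, 0).astype(int)
    # Iterate the 4-neighbour relaxation until it converges.
    changed = True
    while changed:
        changed = False
        for r in range(h):
            for c in range(w):
                if labels[r, c] == 0:
                    continue
                best = dist[r, c]
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] == labels[r, c]:
                        best = min(best, dist[rr, cc] + 1)
                    else:
                        # Neighbour is background, another cell, or off-image.
                        best = min(best, 1)
                if best < dist[r, c]:
                    dist[r, c] = best
                    changed = True
    return dist

# Two touching cells sharing a vertical border: the contact line stays at 1.
labels = np.zeros((4, 6), dtype=int)
labels[:, :3] = 1
labels[:, 3:] = 2
dist = border_distance_map(labels)
```

In practice one would compute such maps per cell with an optimized transform such as `scipy.ndimage.distance_transform_edt`; the point of the representation is that a network regressing `dist` is forced to learn low values at contacts between touching cells, which a plain binary mask cannot express.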
This list is automatically generated from the titles and abstracts of the papers in this site.