Cell Segmentation and Tracking using CNN-Based Distance Predictions and
a Graph-Based Matching Strategy
- URL: http://arxiv.org/abs/2004.01486v4
- Date: Thu, 22 Oct 2020 14:51:01 GMT
- Title: Cell Segmentation and Tracking using CNN-Based Distance Predictions and
a Graph-Based Matching Strategy
- Authors: Tim Scherr, Katharina Löffler, Moritz Böhland, Ralf Mikut
- Abstract summary: We present a method for the segmentation of touching cells in microscopy images.
By using a novel representation of cell borders, inspired by distance maps, our method can exploit not only touching cells but also close cells during training.
This representation is notably robust to annotation errors and shows promising results for the segmentation of microscopy images containing cell types that are underrepresented in, or absent from, the training data.
- Score: 0.20999222360659608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The accurate segmentation and tracking of cells in microscopy image sequences
is an important task in biomedical research, e.g., for studying the development
of tissues, organs or entire organisms. However, the segmentation of touching
cells in images with a low signal-to-noise-ratio is still a challenging
problem. In this paper, we present a method for the segmentation of touching
cells in microscopy images. By using a novel representation of cell borders,
inspired by distance maps, our method can exploit not only touching cells but
also close cells during training. Furthermore, this representation is notably
robust to annotation errors and shows promising results for the segmentation
of microscopy images containing cell types that are underrepresented in, or
absent from, the training data. For the prediction of the
proposed neighbor distances, an adapted U-Net convolutional neural network
(CNN) with two decoder paths is used. In addition, we adapt a graph-based cell
tracking algorithm to evaluate our proposed method on the task of cell
tracking. The adapted tracking algorithm includes a movement estimation in the
cost function to re-link tracks with missing segmentation masks over a short
sequence of frames. Our combined tracking-by-detection method has proven its
potential in the IEEE ISBI 2020 Cell Tracking Challenge
(http://celltrackingchallenge.net/), where, as team KIT-Sch-GE, we achieved
multiple top-three rankings, including two top performances using a single
segmentation model for the diverse data sets.
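As a rough illustration of the neighbor-distance idea described in the abstract, the sketch below computes, for a toy instance-labelled mask, a normalized cell distance map (distance to the nearest non-cell pixel) and an inverse neighbor distance map that peaks near pixels of other cells. The function name, brute-force distance computation, and exact normalization are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def distance_maps(labels: np.ndarray):
    """Sketch of cell/neighbor distance maps from an instance-labelled
    mask; the paper's actual normalization and computation may differ."""
    h, w = labels.shape
    ys, xs = np.indices((h, w))
    cell_dist = np.zeros((h, w), dtype=float)
    neigh_dist = np.zeros((h, w), dtype=float)
    for cid in np.unique(labels):
        if cid == 0:  # background label
            continue
        inside = labels == cid
        outside = ~inside
        oy, ox = ys[outside], xs[outside]
        # cell distance: distance to the nearest pixel outside this cell
        for y, x in zip(ys[inside], xs[inside]):
            cell_dist[y, x] = np.min(np.hypot(oy - y, ox - x))
        m = cell_dist[inside].max()
        if m > 0:  # normalize to [0, 1] within each cell
            cell_dist[inside] /= m
        # neighbor distance: inverse distance to the closest pixel of
        # any *other* cell, so values are largest near touching borders
        others = (labels > 0) & ~inside
        if others.any():
            ny, nx = ys[others], xs[others]
            for y, x in zip(ys[inside], xs[inside]):
                d = np.min(np.hypot(ny - y, nx - x))
                neigh_dist[y, x] = 1.0 / (1.0 + d)
    return cell_dist, neigh_dist
```

In the paper, maps of this kind are the regression targets of the two decoder paths of the adapted U-Net; the sketch only shows how such targets could be derived from annotations.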
Related papers
- Cell as Point: One-Stage Framework for Efficient Cell Tracking [54.19259129722988]
This paper proposes the novel end-to-end CAP framework to achieve efficient and stable cell tracking in one stage.
CAP abandons detection or segmentation stages and simplifies the process by exploiting the correlation among the trajectories of cell points to track cells jointly.
CAP demonstrates strong cell tracking performance while also being 10 to 55 times more efficient than existing methods.
arXiv Detail & Related papers (2024-11-22T10:16:35Z) - Trackastra: Transformer-based cell tracking for live-cell microscopy [0.0]
Trackastra is a general purpose cell tracking approach that uses a simple transformer architecture to learn pairwise associations of cells.
We show that our tracking approach performs on par with or better than highly tuned state-of-the-art cell tracking algorithms.
arXiv Detail & Related papers (2024-05-24T16:44:22Z) - Cell Graph Transformer for Nuclei Classification [78.47566396839628]
We develop a cell graph transformer (CGT) that treats nodes and edges as input tokens to enable learnable adjacency and information exchange among all nodes.
Poor features can lead to noisy self-attention scores and inferior convergence.
We propose a novel topology-aware pretraining method that leverages a graph convolutional network (GCN) to learn a feature extractor.
arXiv Detail & Related papers (2024-02-20T12:01:30Z) - Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - EfficientCellSeg: Efficient Volumetric Cell Segmentation Using Context
Aware Pseudocoloring [4.555723508665994]
We introduce a small convolutional neural network (CNN) for volumetric cell segmentation.
Our model is efficient and has an asymmetric encoder-decoder structure with very few parameters in the decoder.
Our method achieves top-ranking results, while our CNN model has up to 25x fewer parameters than other top-ranking methods.
arXiv Detail & Related papers (2022-04-06T18:02:15Z) - Graph Neural Network for Cell Tracking in Microscopy Videos [0.0]
We present a novel graph neural network (GNN) approach for cell tracking in microscopy videos.
By modeling the entire time-lapse sequence as a directed graph, we extract the entire set of cell trajectories.
We exploit a deep metric learning algorithm to extract cell feature vectors that distinguish between instances of different biological cells.
arXiv Detail & Related papers (2022-02-09T21:21:48Z) - CellTrack R-CNN: A Novel End-To-End Deep Neural Network for Cell
Segmentation and Tracking in Microscopy Images [21.747994390120105]
We propose a novel approach to combine cell segmentation and cell tracking into a unified end-to-end deep learning based framework.
Our method outperforms state-of-the-art algorithms in terms of both cell segmentation and cell tracking accuracies.
arXiv Detail & Related papers (2021-02-20T15:55:40Z) - AttentionNAS: Spatiotemporal Attention Cell Search for Video
Classification [86.64702967379709]
We propose a novel search space for spatiotemporal attention cells, which allows the search algorithm to flexibly explore various design choices in the cell.
The discovered attention cells can be seamlessly inserted into existing backbone networks, e.g., I3D or S3D, and improve video accuracy by more than 2% on both Kinetics-600 and MiT datasets.
arXiv Detail & Related papers (2020-07-23T14:30:05Z) - Split and Expand: An inference-time improvement for Weakly Supervised
Cell Instance Segmentation [71.50526869670716]
We propose a two-step post-processing procedure, Split and Expand, to improve the conversion of segmentation maps to instances.
In the Split step, we split clumps of cells from the segmentation map into individual cell instances with the guidance of cell-center predictions.
In the Expand step, we find missing small cells using the cell-center predictions.
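A minimal sketch of how such a two-step post-processing could look, assuming nearest-center assignment stands in for the Split step and a fixed small radius for the Expand step (both simplifying assumptions; the paper's actual procedure is more elaborate):

```python
import numpy as np

def split_and_expand(seg: np.ndarray, centers, expand_radius=1):
    """Toy Split-and-Expand: split a binary segmentation map into
    instances using predicted cell centers, then add small instances
    for centers the segmentation map missed."""
    h, w = seg.shape
    ys, xs = np.indices((h, w))
    instances = np.zeros((h, w), dtype=int)
    cy = np.array([c[0] for c in centers])
    cx = np.array([c[1] for c in centers])
    fg = seg > 0
    # Split: assign every foreground pixel to its closest predicted center
    d = np.hypot(ys[fg][:, None] - cy[None, :],
                 xs[fg][:, None] - cx[None, :])
    instances[fg] = np.argmin(d, axis=1) + 1
    # Expand: a center not covered by the segmentation map becomes a
    # small new instance of radius `expand_radius`
    for i, (y, x) in enumerate(centers, start=1):
        if not fg[y, x]:
            disk = np.hypot(ys - y, xs - x) <= expand_radius
            instances[disk & ~fg] = i
    return instances
```

Here the Split step separates a clump covering several centers into one instance per center, and the Expand step recovers small cells that the segmentation map missed entirely, mirroring the two steps described above in toy form.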
arXiv Detail & Related papers (2020-07-21T14:05:09Z) - Learning to segment clustered amoeboid cells from brightfield microscopy
via multi-task learning with adaptive weight selection [6.836162272841265]
We introduce a novel supervised technique for cell segmentation in a multi-task learning paradigm.
A multi-task loss combining region and cell boundary detection is employed to improve the prediction performance of the network.
We observe an overall Dice score of 0.93 on the validation set, an improvement of over 15.9% compared to a recent unsupervised method, and our approach outperforms the popular supervised U-Net algorithm by at least 5.8% on average.
arXiv Detail & Related papers (2020-05-19T11:31:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences.