Enhancing Cell Tracking with a Time-Symmetric Deep Learning Approach
- URL: http://arxiv.org/abs/2308.03887v3
- Date: Tue, 3 Sep 2024 08:53:32 GMT
- Title: Enhancing Cell Tracking with a Time-Symmetric Deep Learning Approach
- Authors: Gergely Szabó, Paolo Bonaiuti, Andrea Ciliberto, András Horváth
- Abstract summary: We develop a new deep-learning based tracking method that relies solely on the assumption that cells can be tracked based on their spatio-temporal neighborhood.
The proposed method has the additional benefit that the motion patterns of the cells can be learned completely by the predictor without any prior assumptions.
- Score: 0.34089646689382486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The accurate tracking of live cells using video microscopy recordings remains a challenging task for popular state-of-the-art image processing based object tracking methods. In recent years, several existing and new applications have attempted to integrate deep-learning based frameworks for this task, but most of them still heavily rely on consecutive frame based tracking embedded in their architecture or other premises that hinder generalized learning. To address this issue, we aimed to develop a new deep-learning based tracking method that relies solely on the assumption that cells can be tracked based on their spatio-temporal neighborhood, without restricting it to consecutive frames. The proposed method has the additional benefit that the motion patterns of the cells can be learned completely by the predictor without any prior assumptions, and it has the potential to handle a large number of video frames with heavy artifacts. The efficacy of the proposed method is demonstrated through biologically motivated validation strategies and compared against multiple state-of-the-art cell tracking methods.
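The spatio-temporal-neighborhood idea can be illustrated with a deliberately simplified classical sketch: instead of linking detections only between consecutive frames, a track may be extended by the nearest detection anywhere within a temporal window, which tolerates frames lost to heavy artifacts. This is an illustration only, not the paper's method (which learns the association with a deep predictor); the centroid format, window size, and distance threshold below are hypothetical.

```python
import math

def link_tracks(detections, window=3, max_dist=5.0):
    """Greedy nearest-neighbour linking over a temporal window.

    detections: dict mapping frame index -> list of (x, y) cell centroids.
    Unlike strictly consecutive-frame linking, a track may bridge up to
    `window` missing frames, tolerating detection dropouts.
    """
    frames = sorted(detections)
    used = {f: set() for f in frames}  # detection indices already claimed
    tracks = []
    for f in frames:
        for i, pt in enumerate(detections[f]):
            if i in used[f]:
                continue
            used[f].add(i)
            track = [(f, pt)]
            t, cur = f, pt
            while True:
                best = None  # (dist, frame, idx, point)
                # search the whole temporal window, not just frame t + 1
                for g in (t + d for d in range(1, window + 1)):
                    for j, q in enumerate(detections.get(g, [])):
                        if j in used[g]:
                            continue
                        dist = math.dist(cur, q)
                        if dist <= max_dist and (best is None or dist < best[0]):
                            best = (dist, g, j, q)
                if best is None:
                    break
                _, g, j, q = best
                used[g].add(j)
                track.append((g, q))
                t, cur = g, q
            tracks.append(track)
    return tracks

# A cell detected at frames 0, 1 and 3 (frame 2 is missing, e.g. an artifact)
# still yields a single track because the window bridges the gap.
demo = {0: [(0.0, 0.0)], 1: [(1.0, 0.0)], 3: [(3.0, 0.0)]}
tracks = link_tracks(demo, window=3, max_dist=2.5)
```
A consecutive-frame-only linker (`window=1`) would split this example into two tracks at the missing frame; widening the window is the simplest stand-in for the temporal-neighborhood assumption the abstract describes.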
Related papers
- Cell as Point: One-Stage Framework for Efficient Cell Tracking [54.19259129722988]
This paper proposes the novel end-to-end CAP framework to achieve efficient and stable cell tracking in one stage.
CAP abandons detection or segmentation stages and simplifies the process by exploiting the correlation among the trajectories of cell points to track cells jointly.
CAP demonstrates strong cell tracking performance while also being 10 to 55 times more efficient than existing methods.
arXiv Detail & Related papers (2024-11-22T10:16:35Z) - Cyclic Refiner: Object-Aware Temporal Representation Learning for Multi-View 3D Detection and Tracking [37.186306646752975]
We propose a unified object-aware temporal learning framework for multi-view 3D detection and tracking tasks.
The proposed model achieves consistent performance gains over baselines of different designs.
arXiv Detail & Related papers (2024-07-03T16:10:19Z) - Deep Temporal Sequence Classification and Mathematical Modeling for Cell Tracking in Dense 3D Microscopy Videos of Bacterial Biofilms [18.563062576080704]
We introduce a novel cell tracking algorithm named DenseTrack.
DenseTrack integrates deep learning with mathematical model-based strategies to establish correspondences between consecutive frames.
We present an eigendecomposition-based cell division detection strategy.
arXiv Detail & Related papers (2024-06-27T23:26:57Z) - Trackastra: Transformer-based cell tracking for live-cell microscopy [0.0]
Trackastra is a general purpose cell tracking approach that uses a simple transformer architecture to learn pairwise associations of cells.
We show that our tracking approach performs on par with or better than highly tuned state-of-the-art cell tracking algorithms.
arXiv Detail & Related papers (2024-05-24T16:44:22Z) - LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry [52.131996528655094]
We present the Long-term Effective Any Point Tracking (LEAP) module.
LEAP innovatively combines visual, inter-track, and temporal cues with mindfully selected anchors for dynamic track estimation.
Based on these traits, we develop LEAP-VO, a robust visual odometry system adept at handling occlusions and dynamic scenes.
arXiv Detail & Related papers (2024-01-03T18:57:27Z) - Cell Tracking-by-detection using Elliptical Bounding Boxes [0.0]
This work proposes a new approach based on the classical tracking-by-detection paradigm.
It approximates the cell shapes as oriented ellipses and then uses generic-purpose oriented object detectors to identify the cells in each frame.
Our results show that our method can achieve detection and tracking results competitively with state-of-the-art techniques.
arXiv Detail & Related papers (2023-10-07T18:47:17Z) - Modeling Continuous Motion for 3D Point Cloud Object Tracking [54.48716096286417]
This paper presents a novel approach that views each tracklet as a continuous stream.
At each timestamp, only the current frame is fed into the network to interact with multi-frame historical features stored in a memory bank.
To enhance the utilization of multi-frame features for robust tracking, a contrastive sequence enhancement strategy is proposed.
arXiv Detail & Related papers (2023-03-14T02:58:27Z) - Crop-Transform-Paste: Self-Supervised Learning for Visual Tracking [137.26381337333552]
In this work, we develop the Crop-Transform-Paste operation, which is able to synthesize sufficient training data.
Since the object state is known in all synthesized data, existing deep trackers can be trained in routine ways without human annotation.
arXiv Detail & Related papers (2021-06-21T07:40:34Z) - Deep Keypoint-Based Camera Pose Estimation with Geometric Constraints [80.60538408386016]
Estimating relative camera poses from consecutive frames is a fundamental problem in visual odometry.
We propose an end-to-end trainable framework consisting of learnable modules for detection, feature extraction, matching and outlier rejection.
arXiv Detail & Related papers (2020-07-29T21:41:31Z) - Self-supervised Video Object Segmentation [76.83567326586162]
The objective of this paper is self-supervised representation learning, with the goal of solving semi-supervised video object segmentation (a.k.a. dense tracking).
We make the following contributions: (i) we propose to improve the existing self-supervised approach, with a simple, yet more effective memory mechanism for long-term correspondence matching; (ii) by augmenting the self-supervised approach with an online adaptation module, our method successfully alleviates tracker drifts caused by spatial-temporal discontinuity; (iii) we demonstrate state-of-the-art results among the self-supervised approaches on DAVIS-2017 and YouTube
arXiv Detail & Related papers (2020-06-22T17:55:59Z) - RetinaTrack: Online Single Stage Joint Detection and Tracking [22.351109024452462]
We focus on the tracking-by-detection paradigm for autonomous driving where both tasks are mission critical.
We propose a conceptually simple and efficient joint model of detection and tracking, called RetinaTrack, which modifies the popular single stage RetinaNet approach.
arXiv Detail & Related papers (2020-03-30T23:46:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.