Gravitational cell detection and tracking in fluorescence microscopy data
- URL: http://arxiv.org/abs/2312.03509v1
- Date: Wed, 6 Dec 2023 14:08:05 GMT
- Title: Gravitational cell detection and tracking in fluorescence microscopy data
- Authors: Nikomidisz Eftimiu, Michal Kozubek
- Abstract summary: We present a novel approach based on gravitational force fields that can compete with, and potentially outperform, modern machine learning models.
This method includes detection, segmentation, and tracking elements, with the results demonstrated on a Cell Tracking Challenge dataset.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Automatic detection and tracking of cells in microscopy images are major
applications of computer vision technologies in both biomedical research and
clinical practice. Though machine learning methods are increasingly common in
these fields, classical algorithms still offer significant advantages for both
tasks, including better explainability, faster computation, lower hardware
requirements and more consistent performance. In this paper, we present a novel
approach based on gravitational force fields that can compete with, and
potentially outperform, modern machine learning models when applied to
fluorescence microscopy images. This method includes detection, segmentation,
and tracking elements, with the results demonstrated on a Cell Tracking
Challenge dataset.
Related papers
- Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology [2.7280901660033643]
This work explores the scaling properties of weakly supervised classifiers and self-supervised masked autoencoders (MAEs).
Our results show that ViT-based MAEs outperform weakly supervised classifiers on a variety of tasks, achieving as much as an 11.5% relative improvement when recalling known biological relationships curated from public databases.
We develop a new channel-agnostic MAE architecture (CA-MAE) that allows for inputting images of different numbers and orders of channels at inference time.
arXiv Detail & Related papers (2024-04-16T02:42:06Z)
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- Physics Embedded Machine Learning for Electromagnetic Data Imaging [83.27424953663986]
Electromagnetic (EM) imaging is widely applied in sensing for security, biomedicine, geophysics, and various industries.
It is an ill-posed inverse problem whose solution is usually computationally expensive. Machine learning (ML) techniques and especially deep learning (DL) show potential in fast and accurate imaging.
This article surveys various schemes to incorporate physics in learning-based EM imaging.
arXiv Detail & Related papers (2022-07-26T02:10:15Z)
- Bayesian Active Learning for Scanning Probe Microscopy: from Gaussian Processes to Hypothesis Learning [0.0]
We discuss the basic principles of Bayesian active learning and illustrate its applications for scanning probe microscopes (SPMs).
These frameworks allow for the use of prior data, the discovery of specific functionalities as encoded in spectral data, and exploration of physical laws manifesting during the experiment.
arXiv Detail & Related papers (2022-05-30T23:01:41Z)
- Learning multi-scale functional representations of proteins from single-cell microscopy data [77.34726150561087]
We show that simple convolutional networks trained on localization classification can learn protein representations that encapsulate diverse functional information.
We also propose a robust evaluation strategy to assess the quality of protein representations across different scales of biological function.
arXiv Detail & Related papers (2022-05-24T00:00:07Z)
- Increasing a microscope's effective field of view via overlapped imaging and machine learning [4.23935174235373]
This work demonstrates a multi-lens microscopic imaging system that overlaps multiple independent fields of view on a single sensor for high-efficiency automated specimen analysis.
arXiv Detail & Related papers (2021-10-10T22:52:36Z)
- Ultrafast Focus Detection for Automated Microscopy [0.0]
We present a fast out-of-focus detection algorithm for electron microscopy images collected serially.
Our technique, Multi-scale Histologic Feature Detection, adapts classical computer vision techniques and is based on detecting various fine-grained histologic features.
Tests are performed that demonstrate near-real-time detection of out-of-focus conditions.
arXiv Detail & Related papers (2021-08-26T22:24:41Z)
- Smart mobile microscopy: towards fully-automated digitization [0.0]
We present a "smart" mobile microscope concept aimed at automatic digitization of the most valuable visual information about the specimen.
We achieve this by combining automated microscope setup control with classic techniques such as auto-focusing, in-focus filtering, and focus-stacking.
arXiv Detail & Related papers (2021-05-24T09:55:29Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern Deep Learning framework is used to autonomously correct setup incoherences, improving the quality of the ptychography reconstruction.
We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction at the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.