DropTrack -- automatic droplet tracking using deep learning for
microfluidic applications
- URL: http://arxiv.org/abs/2205.02568v1
- Date: Thu, 5 May 2022 11:03:32 GMT
- Title: DropTrack -- automatic droplet tracking using deep learning for
microfluidic applications
- Authors: Mihir Durve, Adriano Tiribocchi, Fabio Bonaccorso, Andrea Montessori,
Marco Lauricella, Michal Bogdan, Jan Guzowski, Sauro Succi
- Abstract summary: One fundamental analysis frequently desired in microfluidic experiments is counting and tracking the droplets.
Here, two deep learning-based algorithms for object detection (YOLO) and object tracking (DeepSORT) are combined into a single image analysis tool, DropTrack.
DropTrack analyzes input videos, extracts droplets' trajectories, and infers other observables of interest, such as droplet numbers.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks are rapidly emerging as data analysis tools, often
outperforming the conventional techniques used in complex microfluidic systems.
One fundamental analysis frequently desired in microfluidic experiments is
counting and tracking the droplets. Specifically, droplet tracking in dense
emulsions is challenging as droplets move in tightly packed configurations.
Sometimes the individual droplets in these dense clusters are hard to resolve,
even for a human observer. Here, two cutting-edge deep learning algorithms for
object detection (YOLO) and object tracking (DeepSORT) are combined into a
single image analysis tool, DropTrack, to track droplets in
microfluidic experiments. DropTrack analyzes input videos, extracts droplets'
trajectories, and infers other observables of interest, such as droplet
numbers. Training an object detector network for droplet recognition with
manually annotated images is a labor-intensive task and a persistent
bottleneck. This work partly resolves this problem by training object detector
networks (YOLOv5) with hybrid datasets containing real and synthetic images. We
present an analysis of a double emulsion experiment as a case study to measure
DropTrack's performance. For our test case, the YOLO networks trained with 60%
synthetic images show droplet-counting performance similar to that of the
network trained using 100% real images, while reducing the image annotation
work by 60%. DropTrack's performance is measured in terms of mean average
precision (mAP), mean square error in droplet counting, and inference speed. The
(mAP), mean square error in counting the droplets, and inference speed. The
fastest configuration of DropTrack runs inference at about 30 frames per
second, well within the standards for real-time image analysis.
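In concrete terms, the pipeline amounts to running a YOLO detector on each video frame and feeding the detections to DeepSORT, which stitches them into per-droplet trajectories. Below is a minimal Python sketch of such a loop; it is an illustration in the spirit of the paper, not the published implementation, and the weight file, video file, confidence threshold, and tracker settings (here via the torch.hub YOLOv5 interface and the deep-sort-realtime package) are all assumptions.

```python
# Minimal sketch of a YOLO + DeepSORT droplet-tracking loop in the spirit of
# DropTrack. Weight/video file names and thresholds are illustrative only.
import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

# Hypothetical YOLOv5 weights fine-tuned on (real + synthetic) droplet images.
model = torch.hub.load("ultralytics/yolov5", "custom", path="droplet_yolov5.pt")
tracker = DeepSort(max_age=30)  # frames a lost droplet survives before deletion

cap = cv2.VideoCapture("emulsion_experiment.mp4")  # assumed input video
trajectories = {}  # track_id -> list of (frame_idx, cx, cy)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detection: YOLOv5 expects RGB input; OpenCV delivers BGR.
    rows = model(frame[:, :, ::-1]).xyxy[0].tolist()  # [x1, y1, x2, y2, conf, cls]
    detections = [([x1, y1, x2 - x1, y2 - y1], conf, "droplet")
                  for x1, y1, x2, y2, conf, cls in rows if conf > 0.4]
    # Tracking: DeepSORT assigns persistent IDs to detections across frames.
    for trk in tracker.update_tracks(detections, frame=frame):
        if not trk.is_confirmed():
            continue
        x1, y1, x2, y2 = trk.to_ltrb()
        trajectories.setdefault(trk.track_id, []).append(
            (frame_idx, (x1 + x2) / 2.0, (y1 + y2) / 2.0))
    frame_idx += 1
cap.release()
print(f"tracked {len(trajectories)} droplets across {frame_idx} frames")
```

Counting then falls out of the tracking: the number of distinct confirmed track IDs gives the droplet count, and the per-ID centroid lists give the trajectories the abstract refers to as observables.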
Related papers
- Temporal Correlation Meets Embedding: Towards a 2nd Generation of JDE-based Real-Time Multi-Object Tracking [52.04679257903805]
Joint Detection and Embedding (JDE) trackers have demonstrated excellent performance in Multi-Object Tracking (MOT) tasks.
Our tracker, named TCBTrack, achieves state-of-the-art performance on multiple public benchmarks.
arXiv Detail & Related papers (2024-07-19T07:48:45Z)
- Exploring Dynamic Transformer for Efficient Object Tracking [58.120191254379854]
We propose DyTrack, a dynamic transformer framework for efficient tracking.
DyTrack automatically learns to configure proper reasoning routes for various inputs, gaining better utilization of the available computational budget.
Experiments on multiple benchmarks demonstrate that DyTrack achieves promising speed-precision trade-offs with only a single model.
arXiv Detail & Related papers (2024-03-26T12:31:58Z)
- MEMTRACK: A Deep Learning-Based Approach to Microrobot Tracking in Dense and Low-Contrast Environments [4.638136711579875]
Motion Enhanced Multi-level Tracker (MEMTrack) is a robust pipeline for detecting and tracking microrobots.
We trained and validated our model using bacterial micro-motors in collagen (tissue phantom) and tested it in collagen and aqueous media.
MEMTrack can quantify average bacteria speed with no statistically significant difference from the laboriously-produced manual tracking data.
arXiv Detail & Related papers (2023-10-13T23:21:32Z)
- ReST: A Reconfigurable Spatial-Temporal Graph Model for Multi-Camera Multi-Object Tracking [11.619493960418176]
Multi-Camera Multi-Object Tracking (MC-MOT) utilizes information from multiple views to better handle problems with occlusion and crowded scenes.
Current graph-based methods do not effectively utilize information regarding spatial and temporal consistency.
We propose a novel reconfigurable graph model that first associates all detected objects across cameras spatially before reconfiguring it into a temporal graph.
arXiv Detail & Related papers (2023-08-25T08:02:04Z)
- DropMAE: Learning Representations via Masked Autoencoders with Spatial-Attention Dropout for Temporal Matching Tasks [77.84636815364905]
This paper studies masked autoencoder (MAE) video pre-training for various temporal matching-based downstream tasks.
We propose DropMAE, which adaptively performs spatial-attention dropout in the frame reconstruction to facilitate temporal correspondence learning in videos.
arXiv Detail & Related papers (2023-04-02T16:40:42Z)
- Benchmarking YOLOv5 and YOLOv7 models with DeepSORT for droplet tracking applications [0.0]
This work benchmarks the YOLOv5 and YOLOv7 networks with DeepSORT in terms of training time and inference time on a custom dataset of microfluidic droplets.
We compare droplet tracking with YOLOv5 and YOLOv7 in terms of training time and the time to analyze a given video across various hardware configurations.
arXiv Detail & Related papers (2023-01-19T17:37:40Z)
- QDTrack: Quasi-Dense Similarity Learning for Appearance-Only Multiple Object Tracking [73.52284039530261]
We present Quasi-Dense Similarity Learning, which densely samples hundreds of object regions on a pair of images for contrastive learning.
We find that the resulting distinctive feature space admits a simple nearest neighbor search at inference time for object association.
We show that our similarity learning scheme is not limited to video data, but can learn effective instance similarity even from static input.
arXiv Detail & Related papers (2022-10-12T15:47:36Z)
- A Bayesian Detect to Track System for Robust Visual Object Tracking and Semi-Supervised Model Learning [1.7268829007643391]
We address problems in a Bayesian tracking and detection framework parameterized by neural network outputs.
We propose a particle filter-based approximate sampling algorithm for tracking object state estimation.
Based on our particle filter inference algorithm, a semi-supervised learning algorithm is utilized for learning the tracking network on intermittently labeled frames.
arXiv Detail & Related papers (2022-05-05T00:18:57Z)
- Finding a Needle in a Haystack: Tiny Flying Object Detection in 4K Videos using a Joint Detection-and-Tracking Approach [19.59528430884104]
We present a neural network model called the Recurrent Correlational Network, where detection and tracking are jointly performed.
In experiments with datasets containing images of scenes with small flying objects, such as birds and unmanned aerial vehicles, the proposed method yielded consistent improvements.
Our network performs as well as state-of-the-art generic object trackers when evaluated as a tracker on a bird image dataset.
arXiv Detail & Related papers (2021-05-18T03:22:03Z)
- Tracking-by-Counting: Using Network Flows on Crowd Density Maps for Tracking Multiple Targets [96.98888948518815]
State-of-the-art multi-object tracking (MOT) methods follow the tracking-by-detection paradigm.
We propose a new MOT paradigm, tracking-by-counting, tailored for crowded scenes.
arXiv Detail & Related papers (2020-07-18T19:51:53Z)
- Cascaded Regression Tracking: Towards Online Hard Distractor Discrimination [202.2562153608092]
We propose a cascaded regression tracker with two sequential stages.
In the first stage, we filter out abundant easily-identified negative candidates.
In the second stage, a discrete sampling based ridge regression is designed to double-check the remaining ambiguous hard samples.
arXiv Detail & Related papers (2020-06-18T07:48:01Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is indeed feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.