Benchmarking YOLOv5 and YOLOv7 models with DeepSORT for droplet tracking
applications
- URL: http://arxiv.org/abs/2301.08189v1
- Date: Thu, 19 Jan 2023 17:37:40 GMT
- Title: Benchmarking YOLOv5 and YOLOv7 models with DeepSORT for droplet tracking
applications
- Authors: Mihir Durve, Sibilla Orsini, Adriano Tiribocchi, Andrea Montessori,
Jean-Michel Tucny, Marco Lauricella, Andrea Camposeo, Dario Pisignano, and
Sauro Succi
- Abstract summary: This work is a benchmark study for the YOLOv5 and YOLOv7 networks with DeepSORT in terms of the training time and inference time for a custom dataset of microfluidic droplets.
We compare the performance of droplet tracking applications built with YOLOv5 and YOLOv7 in terms of training time and the time to analyze a given video across various hardware configurations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tracking droplets in microfluidics is a challenging task. The difficulty
arises in choosing a tool to analyze general microfluidic videos to infer
physical quantities. The state-of-the-art object detector algorithm You Only
Look Once (YOLO) and the object tracking algorithm Simple Online and Realtime
Tracking with a Deep Association Metric (DeepSORT) are customizable for droplet
identification and tracking. The customization includes training YOLO and
DeepSORT networks to identify and track the objects of interest. We trained
several YOLOv5 and YOLOv7 models and the DeepSORT network for droplet
identification and tracking from microfluidic experimental videos. We compare
the performance of droplet tracking applications built with YOLOv5 and YOLOv7
in terms of training time and the time to analyze a given video across various
hardware configurations. Despite the latest YOLOv7 being 10% faster, real-time
tracking is achieved only by the lighter YOLO models on an RTX 3070 Ti GPU
machine, owing to the significant additional cost of droplet tracking
introduced by the DeepSORT algorithm. This work is a benchmark study of the
YOLOv5 and YOLOv7 networks with DeepSORT in terms of training time and
inference time for a custom dataset of microfluidic droplets.
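To make the benchmarked pipeline concrete, the following is a minimal sketch of the kind of detection-plus-tracking loop whose per-frame cost the paper measures. It assumes a YOLOv5 model fine-tuned on droplet images and loaded through the ultralytics/yolov5 torch.hub interface, plus the third-party deep-sort-realtime package; the file names droplets.pt and droplets.mp4 are placeholders, and this is an illustration rather than the authors' exact code.

```python
# Illustrative droplet-tracking loop: YOLOv5 detection + DeepSORT tracking.
# Assumptions: "droplets.pt" comes from fine-tuning YOLOv5 on droplet images;
# "droplets.mp4" is a placeholder microfluidic video; the tracker is the
# third-party deep-sort-realtime package, not the authors' own code.
import time

import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

model = torch.hub.load("ultralytics/yolov5", "custom", path="droplets.pt")
tracker = DeepSort(max_age=30)  # drop tracks unseen for 30 frames

cap = cv2.VideoCapture("droplets.mp4")
n_frames, t_detect, t_track = 0, 0.0, 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break

    t0 = time.perf_counter()
    results = model(frame)  # YOLOv5 inference on one frame
    t1 = time.perf_counter()

    # Convert [x1, y1, x2, y2, conf, cls] rows into DeepSORT's
    # ([left, top, width, height], confidence, class) format.
    detections = [
        ([x1, y1, x2 - x1, y2 - y1], conf, int(cls))
        for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist()
    ]
    tracks = tracker.update_tracks(detections, frame=frame)
    t2 = time.perf_counter()

    for track in tracks:
        if track.is_confirmed():
            print(n_frames, track.track_id, track.to_ltrb())

    n_frames += 1
    t_detect += t1 - t0
    t_track += t2 - t1

cap.release()
n = max(1, n_frames)
print(f"detection: {1e3 * t_detect / n:.1f} ms/frame, "
      f"tracking: {1e3 * t_track / n:.1f} ms/frame")
```

Timing the detection and association stages separately, as above, exposes the paper's central finding: DeepSORT's appearance-feature extraction and matching add a significant per-frame cost on top of YOLO inference, so only the lighter YOLO variants keep the full pipeline real-time.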
Related papers
- YOLOE: Real-Time Seeing Anything [64.35836518093342]
YOLOE integrates detection and segmentation across diverse open prompt mechanisms within a single highly efficient model.
The model attains exceptional zero-shot performance and transferability with high inference efficiency and low training cost.
arXiv Detail & Related papers (2025-03-10T15:42:59Z)
- YOLO Evolution: A Comprehensive Benchmark and Architectural Review of YOLOv12, YOLO11, and Their Previous Versions [0.0]
This study presents the first comprehensive experimental evaluation of YOLO versions from YOLOv3 through the latest, YOLOv12.
The challenges considered include varying object sizes, diverse aspect ratios, and small-sized objects of a single class.
Our analysis highlights the distinctive strengths and limitations of each YOLO version.
arXiv Detail & Related papers (2024-10-31T20:45:00Z)
- YOLOv5, YOLOv8 and YOLOv10: The Go-To Detectors for Real-time Vision [0.6662800021628277]
This paper traces the evolution of the YOLO (You Only Look Once) object detection algorithm, focusing on YOLOv5, YOLOv8, and YOLOv10.
We analyze the architectural advancements, performance improvements, and suitability for edge deployment across these versions.
arXiv Detail & Related papers (2024-07-03T10:40:20Z)
- YOLOv10: Real-Time End-to-End Object Detection [68.28699631793967]
YOLOs have emerged as the predominant paradigm in the field of real-time object detection.
The reliance on the non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs.
We introduce the holistic efficiency-accuracy driven model design strategy for YOLOs.
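For context on the post-processing cost that YOLOv10 eliminates, here is a minimal, illustrative NumPy implementation of standard greedy NMS (not code from any of the papers listed here): detections are kept in descending confidence order, and any candidate overlapping an already-kept box above an IoU threshold is suppressed.

```python
# Minimal greedy non-maximum suppression (illustrative, NumPy only).
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; returns kept indices."""
    order = scores.argsort()[::-1]  # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top box against the remaining candidates.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Keep only candidates that do not overlap the kept box too much.
        order = order[1:][iou <= iou_thresh]
    return keep
```

Because this loop is sequential and data-dependent, it resists fusion into the network graph, which is why removing NMS simplifies end-to-end deployment.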
arXiv Detail & Related papers (2024-05-23T11:44:29Z)
- YOLO-World: Real-Time Open-Vocabulary Object Detection [87.08732047660058]
We introduce YOLO-World, an innovative approach that enhances YOLO with open-vocabulary detection capabilities.
Our method excels in detecting a wide range of objects in a zero-shot manner with high efficiency.
YOLO-World achieves 35.4 AP with 52.0 FPS on V100, which outperforms many state-of-the-art methods in terms of both accuracy and speed.
arXiv Detail & Related papers (2024-01-30T18:59:38Z)
- Investigating YOLO Models Towards Outdoor Obstacle Detection For Visually Impaired People [3.4628430044380973]
Seven different YOLO object detection models were implemented.
YOLOv8 was found to be the best model, reaching a precision of 80% and a recall of 68.2% on a well-known obstacle dataset.
YOLO-NAS was found to be suboptimal for the obstacle detection task.
arXiv Detail & Related papers (2023-12-10T13:16:22Z)
- BEVTrack: A Simple and Strong Baseline for 3D Single Object Tracking in Bird's-Eye View [56.77287041917277]
3D Single Object Tracking (SOT) is a fundamental task of computer vision, proving essential for applications like autonomous driving.
In this paper, we propose BEVTrack, a simple yet effective baseline method.
By estimating target motion in Bird's-Eye View (BEV) to perform tracking, BEVTrack demonstrates surprising simplicity in its network design, training objectives, and tracking pipeline, while achieving superior performance.
arXiv Detail & Related papers (2023-09-05T12:42:26Z)
- SATAY: A Streaming Architecture Toolflow for Accelerating YOLO Models on FPGA Devices [48.47320494918925]
This work tackles the challenges of deploying state-of-the-art object detection models onto FPGA devices for ultra-low-latency applications.
We employ a streaming architecture design for our YOLO accelerators, implementing the complete model on-chip in a deeply pipelined fashion.
We introduce novel hardware components to support the operations of YOLO models in a dataflow manner, and off-chip memory buffering to address the limited on-chip memory resources.
arXiv Detail & Related papers (2023-09-04T13:15:01Z)
- YOLO-MS: Rethinking Multi-Scale Representation Learning for Real-time Object Detection [80.11152626362109]
We provide an efficient and performant object detector, termed YOLO-MS.
We train our YOLO-MS on the MS COCO dataset from scratch without relying on any other large-scale datasets.
Our work can also be used as a plug-and-play module for other YOLO models.
arXiv Detail & Related papers (2023-08-10T10:12:27Z)
- Video object tracking based on YOLOv7 and DeepSORT [7.651368436751519]
We propose using YOLOv7 as the object detection component of DeepSORT, yielding YOLOv7-DeepSORT.
Experimental evaluation shows that, compared with the previous YOLOv5-DeepSORT, YOLOv7-DeepSORT performs better in tracking accuracy.
arXiv Detail & Related papers (2022-07-25T13:43:34Z)
- DropTrack -- automatic droplet tracking using deep learning for microfluidic applications [0.0]
One fundamental analysis frequently desired in microfluidic experiments is counting and tracking the droplets.
Here, two deep learning-based algorithms for object detection (YOLO) and object tracking (DeepSORT) are combined into a single image analysis tool, DropTrack.
DropTrack analyzes input videos, extracts droplets' trajectories, and infers other observables of interest, such as droplet numbers.
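As a hypothetical sketch of the kind of post-processing such a tool performs (not DropTrack's actual code), per-frame tracker output can be grouped by track ID to recover trajectories, count distinct droplets, and estimate simple observables such as velocity:

```python
# Hypothetical post-processing of tracker output: group per-frame records
# (frame index, track id, x, y) into trajectories and count droplets.
from collections import defaultdict

records = [  # placeholder data; in practice, collected from the tracker loop
    (0, "1", 12.0, 40.0), (1, "1", 15.0, 40.5),
    (0, "2", 80.0, 41.0), (1, "2", 83.5, 41.2),
]

trajectories = defaultdict(list)
for frame, track_id, x, y in records:
    trajectories[track_id].append((frame, x, y))

print("droplet count:", len(trajectories))
for track_id, points in trajectories.items():
    # Mean x-displacement per frame as a crude velocity estimate.
    dx = [(b[1] - a[1]) / (b[0] - a[0]) for a, b in zip(points, points[1:])]
    print(track_id, "mean vx:", sum(dx) / len(dx))
```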
arXiv Detail & Related papers (2022-05-05T11:03:32Z)
- A lightweight and accurate YOLO-like network for small target detection in Aerial Imagery [94.78943497436492]
We present YOLO-S, a simple, fast and efficient network for small target detection.
YOLO-S exploits a small feature extractor based on Darknet20, as well as skip connection, via both bypass and concatenation.
YOLO-S has an 87% smaller parameter count and nearly half the FLOPs of YOLOv3, making deployment practical for low-power industrial applications.
arXiv Detail & Related papers (2022-04-05T16:29:49Z)