YOLO-MS: Rethinking Multi-Scale Representation Learning for Real-time
Object Detection
- URL: http://arxiv.org/abs/2308.05480v1
- Date: Thu, 10 Aug 2023 10:12:27 GMT
- Title: YOLO-MS: Rethinking Multi-Scale Representation Learning for Real-time
Object Detection
- Authors: Yuming Chen, Xinbin Yuan, Ruiqi Wu, Jiabao Wang, Qibin Hou, Ming-Ming
Cheng
- Abstract summary: We provide an efficient and performant object detector, termed YOLO-MS.
We train our YOLO-MS on the MS COCO dataset from scratch without relying on any other large-scale datasets.
Our work can also be used as a plug-and-play module for other YOLO models.
- Score: 80.11152626362109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We aim at providing the object detection community with an efficient and
performant object detector, termed YOLO-MS. The core design is based on a
series of investigations on how convolutions with different kernel sizes affect
the detection performance of objects at different scales. The outcome is a new
strategy that can strongly enhance multi-scale feature representations of
real-time object detectors. To verify the effectiveness of our strategy, we
build a network architecture, termed YOLO-MS. We train our YOLO-MS on the MS
COCO dataset from scratch without relying on any other large-scale datasets,
like ImageNet, or pre-trained weights. Without bells and whistles, our YOLO-MS
outperforms the recent state-of-the-art real-time object detectors, including
YOLO-v7 and RTMDet, when using a comparable number of parameters and FLOPs.
Taking the XS version of YOLO-MS as an example, with only 4.5M learnable
parameters and 8.7G FLOPs, it can achieve an AP score of 43%+ on MS COCO, which
is about 2%+ higher than RTMDet with the same model size. Moreover, our work
can also be used as a plug-and-play module for other YOLO models. Notably,
our method significantly improves the AP of YOLOv8 from 37%+ to 40%+ with even
fewer parameters and FLOPs. Code is available at
https://github.com/FishAndWasabi/YOLO-MS.
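
As a concrete illustration of the kernel-size idea in the abstract, the sketch below shows a multi-branch block in which each branch applies a depth-wise convolution with a different kernel size before the outputs are fused. This is a minimal sketch in PyTorch under our own assumptions: the module name, branch layout, and channel split are simplifications for illustration, not the authors' released implementation (see the linked repository for that).

    import torch
    import torch.nn as nn

    class MultiScaleBlock(nn.Module):
        """Illustrative sketch only: split channels into branches, give
        each branch a depth-wise conv with a different kernel size, then
        fuse with a 1x1 conv. This mirrors the paper's premise that
        heterogeneous kernel sizes serve objects at different scales;
        it is NOT the released YOLO-MS block."""

        def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
            super().__init__()
            assert channels % len(kernel_sizes) == 0
            branch_ch = channels // len(kernel_sizes)
            self.branches = nn.ModuleList(
                nn.Sequential(
                    # depth-wise conv keeps large kernels cheap
                    nn.Conv2d(branch_ch, branch_ch, k, padding=k // 2,
                              groups=branch_ch, bias=False),
                    nn.BatchNorm2d(branch_ch),
                    nn.SiLU(inplace=True),
                )
                for k in kernel_sizes
            )
            self.fuse = nn.Conv2d(channels, channels, 1, bias=False)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            chunks = torch.chunk(x, len(self.branches), dim=1)
            out = torch.cat([b(c) for b, c in zip(self.branches, chunks)], dim=1)
            return self.fuse(out) + x  # residual shortcut

    x = torch.randn(1, 96, 80, 80)
    assert MultiScaleBlock(96)(x).shape == x.shape
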
Related papers
- YOLOv10: Real-Time End-to-End Object Detection [68.28699631793967]
YOLOs have emerged as the predominant paradigm in the field of real-time object detection.
Reliance on non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs.
We introduce a holistic efficiency-accuracy-driven model design strategy for YOLOs.
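
For context on why NMS is an obstacle to end-to-end deployment: classical greedy NMS is a sequential, data-dependent loop over predictions, which is awkward to fold into a single exported inference graph. A minimal sketch follows (plain PyTorch for illustration only; real pipelines would call an optimized routine such as torchvision.ops.nms).

    import torch

    def greedy_nms(boxes: torch.Tensor, scores: torch.Tensor,
                   iou_thresh: float = 0.5) -> torch.Tensor:
        """Keep the best-scoring box, drop remaining boxes that overlap
        it by more than iou_thresh, and repeat. Boxes are (N, 4) in
        xyxy format. The data-dependent while-loop is exactly what is
        hard to export inside an end-to-end inference graph."""
        order = scores.argsort(descending=True)
        keep = []
        while order.numel() > 0:
            i = order[0]
            keep.append(int(i))
            if order.numel() == 1:
                break
            rest = boxes[order[1:]]
            # IoU between boxes[i] and every remaining box
            lt = torch.maximum(boxes[i, :2], rest[:, :2])
            rb = torch.minimum(boxes[i, 2:], rest[:, 2:])
            inter = (rb - lt).clamp(min=0).prod(dim=1)
            area_i = (boxes[i, 2:] - boxes[i, :2]).prod()
            area_rest = (rest[:, 2:] - rest[:, :2]).prod(dim=1)
            iou = inter / (area_i + area_rest - inter)
            order = order[1:][iou <= iou_thresh]
        return torch.tensor(keep)
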
arXiv Detail & Related papers (2024-05-23T11:44:29Z)
- MODIPHY: Multimodal Obscured Detection for IoT using PHantom Convolution-Enabled Faster YOLO [10.183459286746196]
We introduce YOLO Phantom, one of the smallest YOLO models ever conceived.
YOLO Phantom achieves comparable accuracy to the latest YOLOv8n model while simultaneously reducing both parameters and model size.
Its real-world efficacy is demonstrated on an IoT platform with advanced low-light and RGB cameras, seamlessly connecting to an AWS-based notification endpoint.
arXiv Detail & Related papers (2024-02-12T18:56:53Z)
- YOLO-World: Real-Time Open-Vocabulary Object Detection [87.08732047660058]
We introduce YOLO-World, an innovative approach that enhances YOLO with open-vocabulary detection capabilities.
Our method excels in detecting a wide range of objects in a zero-shot manner with high efficiency.
YOLO-World achieves 35.4 AP with 52.0 FPS on V100, which outperforms many state-of-the-art methods in terms of both accuracy and speed.
arXiv Detail & Related papers (2024-01-30T18:59:38Z)
- YOLOBench: Benchmarking Efficient Object Detectors on Embedded Systems [0.0873811641236639]
We present YOLOBench, a benchmark comprising 550+ YOLO-based object detection models evaluated on 4 datasets and 4 embedded hardware platforms.
We collect accuracy and latency numbers for a variety of YOLO-based one-stage detectors at different model scales by performing a fair, controlled comparison of these detectors with a fixed training environment.
We evaluate training-free accuracy estimators used in neural architecture search on YOLOBench and demonstrate that, while most state-of-the-art zero-cost accuracy estimators are outperformed by a simple baseline like MAC count, some of them can be effectively used to predict Pareto-optimal detection models.
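
To make the MAC-count baseline concrete: ranking candidates by multiply-accumulate count needs no training signal at all. A rough hand-rolled sketch for a PyTorch model is below; the helper name is our own, and in practice one would use an established profiler (e.g. fvcore or ptflops) rather than this illustration.

    import torch
    import torch.nn as nn

    def approx_macs(model: nn.Module, input_shape=(1, 3, 640, 640)) -> int:
        """Rough multiply-accumulate count for conv/linear layers,
        collected with forward hooks during one dummy pass. As a
        zero-cost proxy, candidate architectures are simply ranked
        by this number; no training is required."""
        total = 0

        def hook(module, inputs, output):
            nonlocal total
            if isinstance(module, nn.Conv2d):
                # MACs = output spatial size * kernel volume * channels
                out_h, out_w = output.shape[-2:]
                kh, kw = module.kernel_size
                total += (out_h * out_w * kh * kw
                          * module.in_channels * module.out_channels
                          // module.groups)
            elif isinstance(module, nn.Linear):
                total += module.in_features * module.out_features

        handles = [m.register_forward_hook(hook)
                   for m in model.modules()
                   if isinstance(m, (nn.Conv2d, nn.Linear))]
        with torch.no_grad():
            model(torch.randn(*input_shape))
        for h in handles:
            h.remove()
        return total

    # e.g. rank a list of candidate models by cost (lower = cheaper):
    # candidates.sort(key=approx_macs)
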
arXiv Detail & Related papers (2023-07-26T01:51:10Z)
- EdgeYOLO: An Edge-Real-Time Object Detector [69.41688769991482]
This paper proposes an efficient, low-complexity and anchor-free object detector based on the state-of-the-art YOLO framework.
We develop an enhanced data augmentation method to effectively suppress overfitting during training, and design a hybrid random loss function to improve the detection accuracy of small objects.
Our baseline model reaches 50.6% AP50:95 and 69.8% AP50 on the MS COCO 2017 dataset and 26.4% AP50:95 and 44.8% AP50 on the VisDrone 2019-DET dataset, and it meets real-time requirements (FPS >= 30) on Nvidia edge-computing devices.
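
A note on the metrics quoted in this entry and throughout the list: AP50 is average precision at a single IoU threshold of 0.50, while AP50:95 averages AP over the ten IoU thresholds from 0.50 to 0.95 in steps of 0.05:

    AP50:95 = (AP@0.50 + AP@0.55 + ... + AP@0.95) / 10
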
arXiv Detail & Related papers (2023-02-15T06:05:14Z)
- YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications [16.047499394184985]
YOLOv6-N hits 35.9% AP on the COCO dataset at a throughput of 1234 FPS on an NVIDIA Tesla T4 GPU.
YOLOv6-S strikes 43.5% AP at 495 FPS, outperforming other mainstream detectors at the same scale.
YOLOv6-M/L achieve better accuracy (49.5%/52.3% AP) than other detectors with similar inference speed.
arXiv Detail & Related papers (2022-09-07T07:47:58Z)
- A lightweight and accurate YOLO-like network for small target detection in Aerial Imagery [94.78943497436492]
We present YOLO-S, a simple, fast and efficient network for small target detection.
YOLO-S exploits a small feature extractor based on Darknet20, as well as skip connections via both bypass and concatenation (the two styles are contrasted in the sketch below).
YOLO-S has 87% fewer parameters and nearly half the FLOPs of YOLOv3, making deployment practical for low-power industrial applications.
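
The summary names two skip-connection styles, additive bypass and concatenation; the minimal sketch below contrasts them. The class name and channel counts are hypothetical, chosen only for illustration, not YOLO-S's actual layout.

    import torch
    import torch.nn as nn

    class SkipStyles(nn.Module):
        """Contrast the two skip styles named in the YOLO-S summary:
        an additive bypass (shapes match, features are summed) and a
        concatenation skip (features are stacked along the channel
        axis, then reduced). Channel counts are arbitrary here."""

        def __init__(self, ch: int = 64):
            super().__init__()
            self.conv_a = nn.Conv2d(ch, ch, 3, padding=1)
            self.reduce = nn.Conv2d(2 * ch, ch, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            y = self.conv_a(x) + x                      # bypass: add
            z = self.reduce(torch.cat([y, x], dim=1))   # concat: stack
            return z
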
arXiv Detail & Related papers (2022-04-05T16:29:49Z)
- Simultaneous Detection and Tracking with Motion Modelling for Multiple Object Tracking [94.24393546459424]
We introduce Deep Motion Modeling Network (DMM-Net) that can estimate multiple objects' motion parameters to perform joint detection and association.
DMM-Net achieves a PR-MOTA score of 12.80 at 120+ fps on the popular UA-DETRAC challenge, delivering better performance at orders-of-magnitude higher speed.
We also contribute a synthetic large-scale public dataset Omni-MOT for vehicle tracking that provides precise ground-truth annotations.
arXiv Detail & Related papers (2020-08-20T08:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.