Channel Pruned YOLOv5-based Deep Learning Approach for Rapid and
Accurate Outdoor Obstacles Detection
- URL: http://arxiv.org/abs/2204.13699v1
- Date: Wed, 27 Apr 2022 21:06:04 GMT
- Title: Channel Pruned YOLOv5-based Deep Learning Approach for Rapid and
Accurate Outdoor Obstacles Detection
- Authors: Zeqian Li, Keyu Qiu, Zhibin Yu
- Abstract summary: One-stage algorithms are widely used in target detection systems that need to be trained with massive data.
Due to their convolutional structure, they demand more computing power and memory.
We apply a pruning strategy to target detection networks to reduce the number of parameters and the model size.
- Score: 6.703770367794502
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One-stage algorithms are widely used in target detection systems that
need to be trained with massive data. Most of them perform well in both
real-time speed and accuracy. However, due to their convolutional structure, they
demand more computing power and memory. Hence, we applied a
pruning strategy to target detection networks to reduce the number of
parameters and the model size. To demonstrate the practicality of the
pruning method, we selected the YOLOv5 model for our experiments and provide a
dataset of outdoor obstacles to show the effect of the pruned model. On this
specific dataset, in the best case, the volume of the network model is reduced by
49.7% compared with the original model, and the inference time is reduced by
52.5%. Meanwhile, we also use data processing methods to compensate for the
drop in accuracy caused by pruning.
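
The abstract does not state the pruning criterion it uses, only that channels are pruned. As a hedged illustration, the sketch below follows one common channel-pruning recipe (network-slimming style): rank channels by the magnitude of their learned BatchNorm scale factor and mark the weakest fraction for removal. The function name and threshold rule are illustrative, not from the paper.

```python
# Minimal channel-pruning sketch (network-slimming style); the paper's exact
# criterion may differ. Channels whose learned BatchNorm scale |gamma| falls
# below a global percentile threshold become candidates for removal.
import torch
import torch.nn as nn

def select_channels_to_prune(model: nn.Module, prune_ratio: float = 0.5):
    """Return a global |gamma| threshold and a per-layer channel keep mask."""
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    threshold = gammas.sort().values[int(len(gammas) * prune_ratio)]
    keep = {name: (m.weight.data.abs() > threshold)
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}
    return threshold, keep
```

In practice the keep masks are used to rebuild physically smaller Conv/BN layers, and the pruned network is fine-tuned afterwards, which plays the same accuracy-recovery role as the data processing methods the abstract mentions.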
Related papers
- A lightweight YOLOv5-FFM model for occlusion pedestrian detection [1.62877896907106]
YOLO, as an efficient and simple one-stage target detection method, is often used for pedestrian detection in various environments.
In this paper, we propose an improved lightweight YOLOv5 model to address the problems of occlusion and computational cost.
This model can achieve better pedestrian detection accuracy with fewer floating-point operations (FLOPs), especially for occluded targets.
arXiv Detail & Related papers (2024-08-13T04:42:02Z)
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find the solutions reachable through our training procedure, including gradient descent and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- PUMA: margin-based data pruning [51.12154122266251]
We focus on data pruning, where some training samples are removed based on their distance to the model's classification boundary (i.e., the margin).
We propose PUMA, a new data pruning strategy that computes the margin using DeepFool.
We show that PUMA can be used on top of the current state-of-the-art methodology in robustness and, unlike existing data pruning strategies, is able to significantly improve model performance.
arXiv Detail & Related papers (2024-05-10T08:02:20Z)
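
PUMA's margin comes from DeepFool, which iteratively finds the smallest perturbation that crosses the decision boundary. The hedged sketch below approximates that margin with a single DeepFool-style linearization step (logit gap divided by its input-gradient norm); the real computation is iterative and more precise.

```python
# One-step, DeepFool-style margin estimate per training sample; an
# approximation of the margin PUMA computes, not the paper's exact procedure.
import torch

def margin_scores(model, x, y):
    """Approximate margin |f_y - f_k| / ||grad_x(f_y - f_k)|| for the
    runner-up class k; small scores mean samples near the boundary."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    f_y = logits.gather(1, y[:, None]).squeeze(1)
    # runner-up class: best logit once the true class is masked out
    runner_up = logits.scatter(1, y[:, None], float("-inf")).argmax(dim=1)
    f_k = logits.gather(1, runner_up[:, None]).squeeze(1)
    (grad,) = torch.autograd.grad((f_y - f_k).sum(), x)
    grad_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
    return (f_y - f_k).detach().abs() / grad_norm
```

Samples are then ranked by this score and a chosen fraction is removed; which end of the ranking to drop is a design decision of the pruning strategy.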
- From Blurry to Brilliant Detection: YOLOv5-Based Aerial Object Detection with Super Resolution [4.107182710549721]
We present an innovative approach that combines super-resolution and an adapted lightweight YOLOv5 architecture.
Our experimental results demonstrate the model's superior performance in detecting small and densely clustered objects.
arXiv Detail & Related papers (2024-01-26T05:50:58Z)
- Exploring the Effectiveness of Dataset Synthesis: An application of Apple Detection in Orchards [68.95806641664713]
We explore the usability of Stable Diffusion 2.1-base for generating synthetic datasets of apple trees for object detection.
We train a YOLOv5m object detection model to predict apples in a real-world apple detection dataset.
Results demonstrate that the model trained on generated data slightly underperforms a baseline model trained on real-world images.
arXiv Detail & Related papers (2023-06-20T09:46:01Z)
- FDINet: Protecting against DNN Model Extraction via Feature Distortion Index [25.69643512837956]
FDINET is a novel defense mechanism that leverages the feature distribution of deep neural network (DNN) models.
It exploits FDI similarity to identify colluding adversaries from distributed extraction attacks.
FDINET exhibits the capability to identify colluding adversaries with an accuracy exceeding 91%.
arXiv Detail & Related papers (2023-06-20T07:14:37Z)
- Gradient-Free Structured Pruning with Unlabeled Data [57.999191898036706]
We propose a gradient-free structured pruning framework that uses only unlabeled data.
Up to 40% of the original FLOP count can be reduced with less than a 4% accuracy loss across all tasks considered.
arXiv Detail & Related papers (2023-03-07T19:12:31Z)
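
The defining constraint here is that neither gradients nor labels are available. A minimal sketch under that constraint: accumulate per-channel activation statistics with forward hooks while streaming unlabeled data, and treat low-activity channels as structured-pruning candidates. The paper's actual scoring rule is not reproduced here.

```python
# Hedged sketch: gradient-free, label-free channel scoring via forward hooks.
import torch
import torch.nn as nn

@torch.no_grad()
def channel_activity(model, unlabeled_loader, device="cpu"):
    scores, hooks = {}, []

    def make_hook(name):
        def hook(_module, _inputs, out):
            # mean |activation| per channel, accumulated over batches
            scores[name] = scores.get(name, 0) + out.abs().mean(dim=(0, 2, 3))
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            hooks.append(module.register_forward_hook(make_hook(name)))
    for x in unlabeled_loader:      # labels are never touched
        model(x.to(device))
    for h in hooks:
        h.remove()
    return scores                   # low score => candidate channel to prune
```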
- Transfer Learning in Deep Learning Models for Building Load Forecasting: Case of Limited Data [0.0]
This paper proposes a Building-to-Building Transfer Learning framework to overcome the limited-data problem and enhance the performance of Deep Learning models.
The proposed approach improved forecasting accuracy by 56.8% compared to conventional deep learning trained from scratch.
arXiv Detail & Related papers (2023-01-25T16:05:47Z)
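
A minimal sketch of the building-to-building idea, assuming a simple LSTM forecaster; the module, layer split, and checkpoint path are illustrative, not from the paper. The encoder trained on a data-rich source building is frozen, and only the output head is fine-tuned on the target building's limited data.

```python
# Hedged transfer-learning sketch: freeze source-building features,
# fine-tune the head on the target building. All names are illustrative.
import torch
import torch.nn as nn

class LoadForecaster(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, features)
        out, _ = self.encoder(x)
        return self.head(out[:, -1])  # forecast from the last time step

model = LoadForecaster()
model.load_state_dict(torch.load("source_building.pt"))  # hypothetical checkpoint
for p in model.encoder.parameters():                     # keep source features
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
```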
- Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off [19.230329532065635]
Sparse training can significantly reduce training costs by shrinking the model size.
Existing sparse training methods mainly use either random-based or greedy-based drop-and-grow strategies.
In this work, we formulate dynamic sparse training as a sparse connectivity search problem.
Experimental results show that sparse models (up to 98% sparsity) obtained by our proposed method outperform the SOTA sparse training methods.
arXiv Detail & Related papers (2022-11-30T01:22:25Z)
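
Both the random-based and greedy-based strategies mentioned above share one primitive: periodically drop some active weights and grow an equal number of inactive ones. Below is a hedged sketch of a single such update on one layer; the gradient-magnitude grow rule is one common greedy heuristic, and replacing grow_idx with a random draw gives the random-based variant.

```python
# One drop-and-grow mask update for a single layer (hedged sketch).
import torch

@torch.no_grad()
def drop_and_grow(weight, grad, mask, update_frac=0.1):
    """weight, grad, and mask share one shape; mask holds 0/1 entries."""
    w, g, m = weight.view(-1), grad.view(-1), mask.view(-1)
    active = m.bool()
    n_update = int(active.sum().item() * update_frac)
    # drop: smallest-magnitude weights among currently active connections
    drop_score = torch.where(active, w.abs(), torch.full_like(w, float("inf")))
    drop_idx = drop_score.argsort()[:n_update]
    # grow: largest-gradient weights among connections inactive before this step
    grow_score = torch.where(active, torch.full_like(w, float("-inf")), g.abs())
    grow_idx = grow_score.argsort(descending=True)[:n_update]
    m[drop_idx] = 0
    m[grow_idx] = 1
    w[grow_idx] = 0.0   # newly grown connections restart from zero
```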
- Effective Model Sparsification by Scheduled Grow-and-Prune Methods [73.03533268740605]
We propose a novel scheduled grow-and-prune (GaP) methodology without pre-training the dense models.
Experiments have shown that such models can match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks.
arXiv Detail & Related papers (2021-06-18T01:03:13Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training in these models with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
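
A hedged sketch of the distance-based idea behind this kind of deterministic uncertainty estimation, simplified relative to the paper (which also learns per-class weight matrices and the centroid updating scheme the summary mentions): confidence is an RBF kernel on the distance from an embedding to each class centroid, and inputs far from every centroid are rejected as out-of-distribution.

```python
# Simplified DUQ-style confidence: RBF kernel distance to class centroids.
import torch

def duq_confidence(embedding: torch.Tensor, centroids: torch.Tensor,
                   sigma: float = 0.1) -> torch.Tensor:
    """embedding: (B, D); centroids: (C, D). Returns (B, C) scores in [0, 1]."""
    sq_dist = torch.cdist(embedding, centroids).pow(2)
    return torch.exp(-sq_dist / (2 * sigma ** 2))

# A test point is rejected as out-of-distribution when even its best class
# score is low: scores.max(dim=1).values < threshold
```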
This list is automatically generated from the titles and abstracts of the papers on this site.