Efficient Apple Maturity and Damage Assessment: A Lightweight Detection
Model with GAN and Attention Mechanism
- URL: http://arxiv.org/abs/2310.09347v1
- Date: Fri, 13 Oct 2023 18:22:30 GMT
- Title: Efficient Apple Maturity and Damage Assessment: A Lightweight Detection
Model with GAN and Attention Mechanism
- Authors: Yufei Liu, Manzhou Li, Qin Ma
- Abstract summary: This study proposes a method based on lightweight convolutional neural networks (CNN) and generative adversarial networks (GAN) for apple ripeness and damage level detection tasks.
In apple ripeness grading detection, the proposed model achieves 95.6% precision, 93.8% recall, 95.0% accuracy, and 56.5 FPS.
In apple damage level detection, it reaches 95.3% precision, 93.7% recall, and 94.5% mAP.
- Score: 7.742643088073472
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study proposes a method based on lightweight convolutional neural
networks (CNN) and generative adversarial networks (GAN) for apple ripeness and
damage level detection tasks. Initially, a lightweight CNN model is designed by
optimizing the model's depth and width, as well as employing advanced model
compression techniques, successfully reducing the model's parameter and
computational requirements, thus enhancing real-time performance in practical
applications. Simultaneously, attention mechanisms are introduced, dynamically
adjusting the importance of different feature layers to improve the performance
in object detection tasks. To address the issues of sample imbalance and
insufficient sample size, GANs are used to generate realistic apple images,
expanding the training dataset and enhancing the model's recognition capability
when faced with apples of varying ripeness and damage levels. Furthermore, by
applying the object detection network for damage location annotation on damaged
apples, the accuracy of damage level detection is improved, providing a more
precise basis for decision-making. Experimental results show that in apple
ripeness grading detection, the proposed model achieves 95.6%, 93.8%, 95.0%,
and 56.5 in precision, recall, accuracy, and FPS, respectively. In apple damage
level detection, the proposed model reaches 95.3%, 93.7%, and 94.5% in
precision, recall, and mAP, respectively. In both tasks, the proposed method
outperforms other mainstream models, demonstrating the excellent performance
and high practical value of the proposed method in apple ripeness and damage
level detection tasks.
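As an illustration of the two backbone ideas the abstract describes, the sketch below combines a depthwise-separable convolution block with an SE-style channel attention layer in PyTorch. The paper does not disclose its exact attention design, compression scheme, or layer names, so every module and hyperparameter here (ChannelAttention, LightweightBlock, the reduction ratio of 4) is an illustrative assumption rather than the authors' implementation.
```python
# Minimal sketch (not the authors' code): a lightweight depthwise-separable
# convolution block plus an SE-style channel attention layer that dynamically
# re-weights feature channels. All names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """SE-style attention: learn a per-channel weight from globally pooled features."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # scale each feature channel by its learned importance


class LightweightBlock(nn.Module):
    """Depthwise-separable convolution followed by channel attention."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.attn = ChannelAttention(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.bn(self.pointwise(self.depthwise(x))))
        return self.attn(x)


if __name__ == "__main__":
    block = LightweightBlock(32, 64, stride=2)
    feats = block(torch.randn(1, 32, 128, 128))
    print(feats.shape)  # torch.Size([1, 64, 64, 64])
```
In a full detector, blocks of this kind would stand in for standard convolutions in the backbone, and the GAN-generated apple images described in the abstract would simply be added to the training set before such a network is trained.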
Related papers
- Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for Advanced Object Detection [55.2480439325792]
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (mAP) of approximately 45.7%, which is a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
arXiv Detail & Related papers (2023-11-21T19:49:13Z)
- Exploring the Effectiveness of Dataset Synthesis: An application of Apple Detection in Orchards [68.95806641664713]
We explore the usability of Stable Diffusion 2.1-base for generating synthetic datasets of apple trees for object detection.
We train a YOLOv5m object detection model to predict apples in a real-world apple detection dataset.
Results demonstrate that the model trained on generated data is slightly underperforming compared to a baseline model trained on real-world images.
arXiv Detail & Related papers (2023-06-20T09:46:01Z)
- A Computer Vision Enabled damage detection model with improved YOLOv5 based on Transformer Prediction Head [0.0]
Current state-of-the-art deep learning (DL)-based damage detection models often lack sufficient feature extraction capability in complex and noisy environments.
DenseSPH-YOLOv5 is a real-time DL-based high-performance damage detection model where DenseNet blocks have been integrated with the backbone.
DenseSPH-YOLOv5 obtains a mean average precision (mAP) of 85.25%, an F1-score of 81.18%, and a precision (P) of 89.51%, outperforming current state-of-the-art models.
arXiv Detail & Related papers (2023-03-07T22:53:36Z)
- A New Knowledge Distillation Network for Incremental Few-Shot Surface Defect Detection [20.712532953953808]
This paper proposes a new knowledge distillation network, called Dual Knowledge Align Network (DKAN).
The proposed DKAN method follows a pretraining-finetuning transfer learning paradigm, and a knowledge distillation framework is designed for the fine-tuning stage.
Experiments have been conducted on the incremental Few-shot NEU-DET dataset and results show that DKAN outperforms other methods on various few-shot scenes.
arXiv Detail & Related papers (2022-09-01T15:08:44Z)
- Adversarial Robustness Assessment of NeuroEvolution Approaches [1.237556184089774]
We evaluate the robustness of models found by two NeuroEvolution approaches on the CIFAR-10 image classification task.
Our results show that when the evolved models are attacked with iterative methods, their accuracy usually drops to, or close to, zero.
Some of these techniques can exacerbate the perturbations added to the original inputs, potentially harming robustness.
arXiv Detail & Related papers (2022-07-12T10:40:19Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- A fast accurate fine-grain object detection model based on YOLOv4 deep neural network [0.0]
Early identification and prevention of various plant diseases in commercial farms and orchards is a key feature of precision agriculture technology.
This paper presents a high-performance real-time fine-grain object detection framework that addresses several obstacles in plant disease detection.
The proposed model is built on an improved version of the You Only Look Once (YOLOv4) algorithm.
arXiv Detail & Related papers (2021-10-30T17:56:13Z)
- Effective Model Sparsification by Scheduled Grow-and-Prune Methods [73.03533268740605]
We propose a novel scheduled grow-and-prune (GaP) methodology without pre-training the dense models.
Experiments have shown that such models can match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks.
arXiv Detail & Related papers (2021-06-18T01:03:13Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Gradients as a Measure of Uncertainty in Neural Networks [16.80077149399317]
We propose to utilize backpropagated gradients to quantify the uncertainty of trained models.
We show that our gradient-based method outperforms state-of-the-art methods by up to 4.8% in AUROC score for out-of-distribution detection.
arXiv Detail & Related papers (2020-08-18T16:58:46Z)
- From Sound Representation to Model Robustness [82.21746840893658]
We investigate the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
Averaged over various experiments on three environmental sound datasets, we found the ResNet-18 model outperforms other deep learning architectures.
arXiv Detail & Related papers (2020-07-27T17:30:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.