Automated Tomato Maturity Estimation Using an Optimized Residual Model with Pruning and Quantization Techniques
- URL: http://arxiv.org/abs/2503.10940v1
- Date: Thu, 13 Mar 2025 22:56:19 GMT
- Title: Automated Tomato Maturity Estimation Using an Optimized Residual Model with Pruning and Quantization Techniques
- Authors: Muhammad Waseem, Chung-Hsuan Huang, Muhammad Muzzammil Sajjad, Laraib Haider Naqvi, Yaqoob Majeed, Tanzeel Ur Rehman, Tayyaba Nadeem
- Abstract summary: Tomato maturity plays a pivotal role in optimizing harvest timing and ensuring product quality. Existing deep learning approaches, while accurate, are often too computationally demanding for practical use in resource-constrained agricultural settings. This study aims to develop a computationally efficient tomato classification model using the ResNet-18 architecture optimized through transfer learning, pruning, and quantization techniques.
- Score: 1.123910458133809
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Tomato maturity plays a pivotal role in optimizing harvest timing and ensuring product quality, but current methods struggle to achieve high accuracy and computational efficiency simultaneously. Existing deep learning approaches, while accurate, are often too computationally demanding for practical use in resource-constrained agricultural settings. In contrast, simpler techniques fail to capture the nuanced features needed for precise classification. This study aims to develop a computationally efficient tomato classification model using the ResNet-18 architecture optimized through transfer learning, pruning, and quantization techniques. Our objective is to address the dual challenge of maintaining high accuracy while enabling real-time performance on low-power edge devices. The optimized models were then deployed on an edge device to investigate their performance for tomato maturity classification. The quantized model achieved an accuracy of 97.81%, with an average classification time of 0.000975 seconds per image. The pruned and auto-tuned model also demonstrated significant improvements in deployment metrics, further highlighting the benefits of optimization techniques. These results underscore the potential for a balanced solution that meets the accuracy and efficiency demands of modern agricultural production, paving the way for practical, real-world deployment in resource-limited environments.
Related papers
- Data-Driven Surrogate Modeling Techniques to Predict the Effective Contact Area of Rough Surface Contact Problems [39.979007027634196]
The effective contact area plays a critical role in multi-physics phenomena such as wear, sealing, and thermal or electrical conduction.
This study proposes a surrogate modeling framework for predicting the effective contact area using fast-to-evaluate data-driven techniques.
arXiv Detail & Related papers (2025-04-24T08:15:46Z) - QGAPHEnsemble : Combining Hybrid QLSTM Network Ensemble via Adaptive Weighting for Short Term Weather Forecasting [0.0]
This research highlights the practical efficacy of employing advanced machine learning techniques. Our model demonstrates a substantial improvement in the accuracy and reliability of meteorological predictions. The paper highlights the importance of optimized ensemble techniques for improving performance on the given weather forecasting task.
arXiv Detail & Related papers (2025-01-18T20:18:48Z) - The Efficiency vs. Accuracy Trade-off: Optimizing RAG-Enhanced LLM Recommender Systems Using Multi-Head Early Exit [46.37267466656765]
This paper presents an optimization framework that combines Retrieval-Augmented Generation (RAG) with an innovative multi-head early exit architecture. Our experiments demonstrate how this architecture effectively decreases time without sacrificing the accuracy needed for reliable recommendation delivery.
arXiv Detail & Related papers (2025-01-04T03:26:46Z) - Synergistic Development of Perovskite Memristors and Algorithms for Robust Analog Computing [53.77822620185878]
We propose a synergistic methodology to concurrently optimize perovskite memristor fabrication and develop robust analog DNNs.
We develop "BayesMulti", a training strategy utilizing BO-guided noise injection to improve the resistance of analog DNNs to memristor imperfections.
Our integrated approach enables use of analog computing in much deeper and wider networks, achieving up to 100-fold improvements.
arXiv Detail & Related papers (2024-12-03T19:20:08Z) - On Importance of Pruning and Distillation for Efficient Low Resource NLP [0.3958317527488535]
Large transformer models have revolutionized Natural Language Processing, leading to significant advances in tasks like text classification.
Efforts have been made to downsize and accelerate English models, but research in this area is scarce for low-resource languages.
In this study, we explore the low-resource-topic-all-docv2 model as our baseline; we implement optimization techniques to reduce computation time and memory usage.
arXiv Detail & Related papers (2024-09-21T14:58:12Z) - Enhanced Droplet Analysis Using Generative Adversarial Networks [0.0]
This work develops an image generator named DropletGAN to generate images of droplets.
It is also used to develop a light droplet detector using the synthetic dataset.
To the best of our knowledge, this work stands as the first to employ a generative model for augmenting droplet detection.
arXiv Detail & Related papers (2024-02-24T21:20:53Z) - QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights, for example, improves the performance of the Llama 2 model by up to 15% in relative terms.
arXiv Detail & Related papers (2023-11-06T00:21:44Z) - LoRAPrune: Structured Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning [56.88751562302793]
Low-rank adaptation (LoRA) has emerged as a parameter-efficient way to fine-tune large language models (LLMs).
LoRAPrune is a new framework that delivers an accurate structured pruned model in a highly memory-efficient manner.
LoRAPrune achieves a reduction in perplexity by 4.81 on WikiText2 and 3.46 on PTB, while also decreasing memory usage by 52.6%.
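LoRAPrune builds on LoRA adapters. A minimal, hypothetical LoRA layer is sketched below; the rank, init scale, and class name are illustrative assumptions, and this does not include LoRAPrune's pruning criterion itself.

```python
import torch

class LoRALinear(torch.nn.Module):
    """Sketch of a LoRA adapter: a frozen base weight plus a trainable
    low-rank update B @ A. Rank and scaling are assumed values."""
    def __init__(self, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        self.base = torch.nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # base weights stay frozen
        self.A = torch.nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(out_features, rank))  # zero init

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-rank update added on top of the frozen base projection
        return self.base(x) + x @ self.A.t() @ self.B.t()
```

Because `B` starts at zero, the adapter is a no-op at initialization; LoRAPrune's insight is that these cheap adapter weights can also guide which base weights to prune.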
arXiv Detail & Related papers (2023-05-28T15:15:48Z) - Towards Compute-Optimal Transfer Learning [82.88829463290041]
We argue that zero-shot structured pruning of pretrained models allows them to increase compute efficiency with minimal reduction in performance.
Our results show that pruning convolutional filters of pretrained models can lead to more than 20% performance improvement in low computational regimes.
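Zero-shot structured pruning of convolutional filters can be sketched with PyTorch's built-in utility; the 25% ratio and the toy layer are assumed examples, not the paper's settings.

```python
import torch
import torch.nn.utils.prune as prune

conv = torch.nn.Conv2d(16, 32, kernel_size=3)
# Remove 25% of the output filters (dim=0) by L2 norm, with no retraining,
# mirroring the zero-shot setting; the ratio is an illustrative assumption.
prune.ln_structured(conv, name="weight", amount=0.25, n=2, dim=0)
prune.remove(conv, "weight")  # make the zeroed filters permanent
```

Pruning whole filters (rather than individual weights) is what makes the compute savings realizable on standard hardware without sparse kernels.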
arXiv Detail & Related papers (2023-04-25T21:49:09Z) - Hybrid quantum ResNet for car classification and its hyperparameter optimization [0.0]
This paper presents a quantum-inspired hyperparameter optimization technique and a hybrid quantum-classical machine learning model for supervised learning.
We test our approaches in a car image classification task and demonstrate a full-scale implementation of the hybrid quantum ResNet model.
A classification accuracy of 0.97 was obtained by the hybrid model after 18 iterations, whereas the classical model achieved an accuracy of 0.92 after 75 iterations.
arXiv Detail & Related papers (2022-05-10T13:25:36Z) - Attentive Fine-Grained Structured Sparsity for Image Restoration [63.35887911506264]
N:M structured pruning has emerged as an effective and practical pruning approach for making models efficient under an accuracy constraint.
We propose a novel pruning method that determines the pruning ratio for N:M structured sparsity at each layer.
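The N:M pattern itself can be sketched in a few lines; this uses a fixed 2:4 pattern for illustration, whereas the paper learns the per-layer pruning ratio, which this sketch does not do.

```python
import torch

def nm_prune(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Keep the n largest-magnitude entries in each group of m consecutive
    weights (classic 2:4 sparsity). A sketch of the pattern only."""
    flat = weight.reshape(-1, m)
    # indices of the (m - n) smallest-magnitude entries per group
    drop = flat.abs().argsort(dim=1)[:, : m - n]
    mask = torch.ones_like(flat)
    mask.scatter_(1, drop, 0.0)
    return (flat * mask).reshape(weight.shape)
```

The weight tensor's element count must be divisible by m; hardware such as sparse tensor cores can then skip the zeroed entries in each group.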
arXiv Detail & Related papers (2022-04-26T12:44:55Z) - Powerpropagation: A sparsity inducing weight reparameterisation [65.85142037667065]
We introduce Powerpropagation, a new weight reparameterisation for neural networks that leads to inherently sparse models.
Models trained in this manner exhibit similar performance, but have a distribution with markedly higher density at zero, allowing more parameters to be pruned safely.
Here, we combine Powerpropagation with a traditional weight-pruning technique as well as recent state-of-the-art sparse-to-sparse algorithms, showing superior performance on the ImageNet benchmark.
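The reparameterisation itself is simple to sketch; the exponent value here is an assumption (the paper treats alpha > 1 as a hyperparameter).

```python
import torch

alpha = 2.0  # assumed value; Powerpropagation requires alpha > 1
theta = torch.randn(10, requires_grad=True)
# Effective weight w = theta * |theta|^(alpha - 1). The gradient w.r.t. theta
# is scaled by |theta|^(alpha - 1), so near-zero parameters learn slowly,
# stay near zero, and become safe to prune.
w = theta * theta.abs().pow(alpha - 1)
```

Training operates on `theta` while the network uses `w`, which is what concentrates the weight distribution's density at zero.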
arXiv Detail & Related papers (2021-10-01T10:03:57Z) - Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve model training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.