Involution-Infused DenseNet with Two-Step Compression for Resource-Efficient Plant Disease Classification
- URL: http://arxiv.org/abs/2506.00735v1
- Date: Sat, 31 May 2025 22:43:23 GMT
- Title: Involution-Infused DenseNet with Two-Step Compression for Resource-Efficient Plant Disease Classification
- Authors: T. Ahmed, S. Jannat, Md. F. Islam, J. Noor
- Abstract summary: This study proposes a two-step model compression approach integrating Weight Pruning and Knowledge Distillation. The results demonstrate ResNet50's superior performance post-compression, achieving 99.55% and 98.99% accuracy on the PlantVillage and PaddyLeaf datasets.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Agriculture is vital for global food security, but crops are vulnerable to diseases that impact yield and quality. While Convolutional Neural Networks (CNNs) accurately classify plant diseases using leaf images, their high computational demands hinder their deployment in resource-constrained settings such as smartphones, edge devices, and real-time monitoring systems. This study proposes a two-step model compression approach integrating Weight Pruning and Knowledge Distillation, along with the hybridization of DenseNet with Involutional Layers. Pruning reduces model size and computational load, while distillation improves the smaller student model's performance by transferring knowledge from a larger teacher network. The hybridization enhances the model's ability to capture spatial features efficiently. These compressed models are suitable for real-time applications, promoting precision agriculture through rapid disease identification and crop management. The results demonstrate ResNet50's superior performance post-compression, achieving 99.55% and 98.99% accuracy on the PlantVillage and PaddyLeaf datasets, respectively. The DenseNet-based model, optimized for efficiency, recorded 99.21% and 93.96% accuracy with a minimal parameter count. Furthermore, the hybrid model achieved 98.87% and 97.10% accuracy, supporting the practical deployment of energy-efficient devices for timely disease intervention and sustainable farming practices.
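The abstract does not include code, so the following is only a rough illustration of the two compression steps it names: magnitude-based weight pruning (zeroing the smallest-magnitude weights) and a temperature-scaled knowledge-distillation loss (matching the student's softened predictions to the teacher's). Function names and defaults here are hypothetical, not taken from the paper.

```python
import math

def magnitude_prune(weights, sparsity):
    """Unstructured pruning: zero the smallest-magnitude fraction of weights."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # Threshold is the k-th smallest absolute value.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence from the softened teacher distribution to the student's,
    scaled by T^2 as is conventional for distillation losses."""
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * temperature ** 2
```

In the pipeline the paper describes, pruning of this kind would be applied to the student network's weights, and the distillation loss would be minimized against the larger teacher's logits during training; deep learning frameworks provide equivalent operations natively (e.g. PyTorch's `torch.nn.utils.prune` and `KLDivLoss`).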
Related papers
- Efficient Federated Learning with Heterogeneous Data and Adaptive Dropout [62.73150122809138]
Federated Learning (FL) is a promising distributed machine learning approach that enables collaborative training of a global model using multiple edge devices. We propose the FedDHAD FL framework, which comes with two novel methods: Dynamic Heterogeneous model aggregation (FedDH) and Adaptive Dropout (FedAD). The combination of these two methods makes FedDHAD significantly outperform state-of-the-art solutions in terms of accuracy (up to 6.7% higher), efficiency (up to 2.02 times faster), and cost (up to 15.0% smaller).
arXiv Detail & Related papers (2025-07-14T16:19:00Z)
- Detection and Classification of Diseases in Multi-Crop Leaves using LSTM and CNN Models [0.0]
Plant diseases pose a serious challenge to agriculture by reducing crop yield and affecting food quality. This study applies Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) models to classify plant leaf diseases.
arXiv Detail & Related papers (2025-04-30T02:36:51Z)
- Enhancing Leaf Disease Classification Using GAT-GCN Hybrid Model [0.23301643766310373]
This research presents a hybrid model combining Graph Attention Networks (GATs) and Graph Convolution Networks (GCNs) for leaf disease classification. GCNs have been widely used for learning from graph-structured data, and GATs enhance this by incorporating attention mechanisms to focus on the most important neighbors. The edge augmentation technique has introduced a significant degree of generalization in the detection capabilities of the model.
arXiv Detail & Related papers (2025-04-07T06:31:38Z)
- Design and Implementation of FourCropNet: A CNN-Based System for Efficient Multi-Crop Disease Detection and Management [3.4161054453684705]
This study proposes FourCropNet, a novel deep learning model designed to detect diseases in multiple crops. FourCropNet achieved the highest accuracy of 99.7% for Grape, 99.5% for Corn, and 95.3% for the combined dataset.
arXiv Detail & Related papers (2025-03-11T12:00:56Z)
- Fine-tuning is Not Fine: Mitigating Backdoor Attacks in GNNs with Limited Clean Data [51.745219224707384]
Graph Neural Networks (GNNs) have achieved remarkable performance through their message-passing mechanism. Recent studies have highlighted the vulnerability of GNNs to backdoor attacks. In this paper, we propose a practical backdoor mitigation framework, denoted as GRAPHNAD.
arXiv Detail & Related papers (2025-01-10T10:16:35Z)
- Improved Cotton Leaf Disease Classification Using Parameter-Efficient Deep Learning Framework [0.0]
Cotton crops, often called "white gold," face significant production challenges. Deep learning and machine learning techniques have been explored to address this challenge. We propose an innovative deep learning framework integrating a subset of trainable layers from MobileNet.
arXiv Detail & Related papers (2024-12-23T14:01:10Z)
- Efficient Fine-Tuning and Concept Suppression for Pruned Diffusion Models [93.76814568163353]
We propose a novel bilevel optimization framework for pruned diffusion models. This framework consolidates the fine-tuning and unlearning processes into a unified phase. It is compatible with various pruning and concept unlearning methods.
arXiv Detail & Related papers (2024-12-19T19:13:18Z)
- Automated Disease Diagnosis in Pumpkin Plants Using Advanced CNN Models [0.0]
Pumpkin is a vital crop cultivated globally, and its productivity is crucial for food security, especially in developing regions.
Recent advancements in machine learning and deep learning offer promising solutions for automating and improving the accuracy of plant disease detection.
This paper presents a comprehensive analysis of state-of-the-art Convolutional Neural Network (CNN) models for classifying diseases in pumpkin plant leaves.
arXiv Detail & Related papers (2024-09-29T14:31:23Z)
- Efficient Point Cloud Classification via Offline Distillation Framework and Negative-Weight Self-Distillation Technique [46.266960248570086]
We introduce an innovative offline recording strategy that avoids the simultaneous loading of both teacher and student models.
This approach feeds a multitude of augmented samples into the teacher model, recording both the data augmentation parameters and the corresponding logit outputs.
Experimental results demonstrate that the proposed distillation strategy enables the student model to achieve performance comparable to state-of-the-art models.
arXiv Detail & Related papers (2024-09-03T16:12:12Z)
- Effective Interplay between Sparsity and Quantization: From Theory to Practice [33.697590845745815]
We show how sparsity and quantization interact when combined. We show that even if applied in the correct order, the compounded errors from sparsity and quantization can significantly harm accuracy. Our findings extend to the efficient deployment of large models in resource-constrained compute platforms.
arXiv Detail & Related papers (2024-05-31T15:34:13Z)
- Enhanced Droplet Analysis Using Generative Adversarial Networks [0.0]
This work develops an image generator named DropletGAN to generate images of droplets.
It is also used to develop a light droplet detector using the synthetic dataset.
To the best of our knowledge, this work stands as the first to employ a generative model for augmenting droplet detection.
arXiv Detail & Related papers (2024-02-24T21:20:53Z)
- Uncovering the Hidden Cost of Model Compression [43.62624133952414]
Visual Prompting has emerged as a pivotal method for transfer learning in computer vision.
Model compression detrimentally impacts the performance of visual prompting-based transfer.
However, negative effects on calibration are not present when models are compressed via quantization.
arXiv Detail & Related papers (2023-08-29T01:47:49Z)
- Towards Simple and Accurate Human Pose Estimation with Stair Network [34.421529219040295]
We develop a small yet discriminative model called STair Network, which can be stacked towards an accurate multi-stage pose estimation system.
To reduce computational cost, STair Network is composed of novel basic feature extraction blocks.
We demonstrate the effectiveness of the STair Network on two standard datasets.
arXiv Detail & Related papers (2022-02-18T10:37:13Z)
- LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
arXiv Detail & Related papers (2021-10-08T17:03:34Z)
- Beyond Self-Supervision: A Simple Yet Effective Network Distillation Alternative to Improve Backbones [40.33419553042038]
We propose to improve existing baseline networks via knowledge distillation from off-the-shelf pre-trained big powerful models.
Our solution performs distillation by only driving prediction of the student model consistent with that of the teacher model.
We empirically find that such simple distillation settings are extremely effective; for example, the top-1 accuracy of MobileNetV3-large and ResNet50-D on the ImageNet-1k validation set can be significantly improved.
arXiv Detail & Related papers (2021-03-10T09:32:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.