Lightweight Model for Poultry Disease Detection from Fecal Images Using Multi-Color Space Feature Optimization and Machine Learning
- URL: http://arxiv.org/abs/2507.10056v1
- Date: Mon, 14 Jul 2025 08:40:46 GMT
- Title: Lightweight Model for Poultry Disease Detection from Fecal Images Using Multi-Color Space Feature Optimization and Machine Learning
- Authors: A. K. M. Shoriful Islam, Md. Rakib Hassan, Macbah Uddin, Md. Shahidur Rahman
- Abstract summary: Poultry farming is a vital component of the global food supply chain, yet it remains highly vulnerable to infectious diseases. This study proposes a lightweight machine learning-based approach to detect these diseases by analyzing poultry fecal images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Poultry farming is a vital component of the global food supply chain, yet it remains highly vulnerable to infectious diseases such as coccidiosis, salmonellosis, and Newcastle disease. This study proposes a lightweight machine learning-based approach to detect these diseases by analyzing poultry fecal images. We utilize multi-color space feature extraction (RGB, HSV, LAB) and explore a wide range of color, texture, and shape-based descriptors, including color histograms, local binary patterns (LBP), wavelet transforms, and edge detectors. Through a systematic ablation study and dimensionality reduction using PCA and XGBoost feature selection, we identify a compact global feature set that balances accuracy and computational efficiency. An artificial neural network (ANN) classifier trained on these features achieved 95.85% accuracy while requiring no GPU and only 638 seconds of execution time in Google Colab. Compared to deep learning models such as Xception and MobileNetV3, our proposed model offers comparable accuracy with drastically lower resource usage. This work demonstrates a cost-effective, interpretable, and scalable alternative to deep learning for real-time poultry disease detection in low-resource agricultural settings.
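The abstract describes the pipeline only at a high level. As a rough, hypothetical sketch of how such a pipeline could be assembled with common Python libraries (OpenCV, scikit-image, scikit-learn, XGBoost), the block below illustrates multi-color space histograms plus an LBP texture descriptor, XGBoost-based feature ranking, and a PCA + ANN classifier. All parameter choices (histogram bins, LBP settings, number of retained features, layer sizes) are assumptions, not the authors' configuration.

```python
# Hypothetical sketch of the described pipeline: multi-color space feature
# extraction (RGB, HSV, LAB), LBP texture descriptors, XGBoost-based feature
# selection, and a PCA + small ANN classifier. All parameters are illustrative.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier


def extract_features(image_bgr: np.ndarray) -> np.ndarray:
    """Concatenate color histograms from RGB/HSV/LAB and an LBP histogram."""
    feats = []
    for code in (cv2.COLOR_BGR2RGB, cv2.COLOR_BGR2HSV, cv2.COLOR_BGR2LAB):
        converted = cv2.cvtColor(image_bgr, code)
        for ch in range(3):  # 32-bin normalized histogram per channel
            hist = cv2.calcHist([converted], [ch], None, [32], [0, 256]).ravel()
            feats.append(hist / (hist.sum() + 1e-8))
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    feats.append(lbp_hist)
    return np.concatenate(feats)


def select_by_xgboost(X: np.ndarray, y: np.ndarray, k: int = 100) -> np.ndarray:
    """Rank features by XGBoost importance and keep the top-k indices."""
    booster = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="mlogloss")
    booster.fit(X, y)
    return np.argsort(booster.feature_importances_)[::-1][:k]


def build_classifier() -> Pipeline:
    """Small ANN on standardized, PCA-reduced features (sizes are guesses)."""
    return Pipeline([
        ("scale", StandardScaler()),
        ("pca", PCA(n_components=0.95)),  # retain ~95% of variance
        ("ann", MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)),
    ])


# Usage (X: stacked extract_features() outputs, y: disease labels):
#   keep = select_by_xgboost(X, y, k=100)
#   clf = build_classifier().fit(X[:, keep], y)
```

In the paper, the compact global feature set is reported to be found through an ablation study combined with PCA and XGBoost feature selection; the top-k cutoff above is only a stand-in for that process.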
Related papers
- A Lightweight and Robust Framework for Real-Time Colorectal Polyp Detection Using LOF-Based Preprocessing and YOLO-v11n [0.3495246564946556]
This study introduces a new, lightweight, and efficient framework for polyp detection.
It combines the Local Outlier Factor algorithm for filtering noisy data with the YOLO-v11n deep learning model.
Compared to previous YOLO-based methods, our model demonstrates enhanced accuracy and efficiency.
arXiv Detail & Related papers (2025-07-14T23:36:54Z)
- Automated Multi-Class Crop Pathology Classification via Convolutional Neural Networks: A Deep Learning Approach for Real-Time Precision Agriculture [0.0]
This research introduces a Convolutional Neural Network (CNN)-based image classification system designed to automate the detection and classification of eight common crop diseases.
The solution is deployed on an open-source, mobile-compatible platform, enabling real-time image-based diagnostics for farmers in remote areas.
arXiv Detail & Related papers (2025-07-12T18:45:50Z)
- PixCell: A generative foundation model for digital histopathology images [49.00921097924924]
We introduce PixCell, the first diffusion-based generative foundation model for histopathology.
We train PixCell on PanCan-30M, a vast, diverse dataset derived from 69,184 H&E-stained whole slide images covering various cancer types.
arXiv Detail & Related papers (2025-06-05T15:14:32Z)
- SugarViT -- Multi-objective Regression of UAV Images with Vision Transformers and Deep Label Distribution Learning Demonstrated on Disease Severity Prediction in Sugar Beet [3.2925222641796554]
This work introduces a machine learning framework for automated, large-scale, plant-specific trait annotation.
We develop an efficient Vision Transformer-based model for disease severity scoring called SugarViT.
Although the model is evaluated on this specific use case, it is kept as generic as possible so that it is also applicable to various image-based classification and regression tasks.
arXiv Detail & Related papers (2023-11-06T13:01:17Z)
- Building Flyweight FLIM-based CNNs with Adaptive Decoding for Object Detection [40.97322222472642]
This work presents a method to build a Convolutional Neural Network (CNN) layer by layer for object detection from user-drawn markers.
We address the detection of Schistosomiasis mansoni eggs in microscopy images of fecal samples, and the detection of ships in satellite images.
Our CNN is thousands of times smaller than SOTA object detectors, is suitable for CPU execution, and shows superior or equivalent performance to three methods on five measures.
arXiv Detail & Related papers (2023-06-26T16:48:20Z)
- A fast accurate fine-grain object detection model based on YOLOv4 deep neural network [0.0]
Early identification and prevention of various plant diseases in commercial farms and orchards is a key feature of precision agriculture technology.
This paper presents a high-performance real-time fine-grain object detection framework that addresses several obstacles in plant disease detection.
The proposed model is built on an improved version of the You Only Look Once (YOLOv4) algorithm.
arXiv Detail & Related papers (2021-10-30T17:56:13Z)
- Lung Cancer Lesion Detection in Histopathology Images Using Graph-Based Sparse PCA Network [93.22587316229954]
We propose a graph-based sparse principal component analysis (GS-PCA) network for automated detection of cancerous lesions on histological lung slides stained by hematoxylin and eosin (H&E).
We evaluate the performance of the proposed algorithm on H&E slides obtained from an SVM K-rasG12D lung cancer mouse model using precision/recall rates, F-score, Tanimoto coefficient, and area under the curve (AUC) of the receiver operating characteristic (ROC).
arXiv Detail & Related papers (2021-10-27T19:28:36Z)
- Artificial intelligence for detection and quantification of rust and leaf miner in coffee crop [0.0]
We create an algorithm capable of detecting rust (Hemileia vastatrix) and leaf miner (Leucoptera coffeella) in coffee leaves.
We quantify disease severity using a mobile application as a high-level interface for the model inferences.
arXiv Detail & Related papers (2021-03-20T20:52:11Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- FocusLiteNN: High Efficiency Focus Quality Assessment for Digital Pathology [42.531674974834544]
We propose a CNN-based model that maintains fast computations similar to the knowledge-driven methods without excessive hardware requirements.
We create a training dataset using FocusPath which encompasses diverse tissue slides across nine different stain colors.
In our attempt to reduce CNN complexity, we find, surprisingly, that even when the CNN is trimmed to a minimal level, it still achieves highly competitive performance.
arXiv Detail & Related papers (2020-07-11T20:52:01Z)
- An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
arXiv Detail & Related papers (2020-02-13T15:28:42Z)