Deep Convolutional Neural Networks for Palm Fruit Maturity Classification
- URL: http://arxiv.org/abs/2502.20223v1
- Date: Thu, 27 Feb 2025 16:06:30 GMT
- Title: Deep Convolutional Neural Networks for Palm Fruit Maturity Classification
- Authors: Mingqiang Han, Chunlin Yi
- Abstract summary: This project aims to develop an automated computer vision system capable of accurately classifying palm fruit images into five ripeness levels. We employ deep Convolutional Neural Networks (CNNs) to classify palm fruit images based on their maturity stage.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: To maximize palm oil yield and quality, it is essential to harvest palm fruit at the optimal maturity stage. This project aims to develop an automated computer vision system capable of accurately classifying palm fruit images into five ripeness levels. We employ deep Convolutional Neural Networks (CNNs) to classify palm fruit images based on their maturity stage. A shallow CNN serves as the baseline model, while transfer learning and fine-tuning are applied to pre-trained ResNet50 and InceptionV3 architectures. The study utilizes a publicly available dataset of over 8,000 images with significant variations, which is split into 80% for training and 20% for testing. The proposed deep CNN models achieve test accuracies exceeding 85% in classifying palm fruit maturity stages. This research highlights the potential of deep learning for automating palm fruit ripeness assessment, which can contribute to optimizing harvesting decisions and improving palm oil production efficiency.
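The paper does not include code; the following is a minimal Keras sketch of the transfer-learning-then-fine-tuning recipe the abstract describes. The `palm_fruit/train` and `palm_fruit/test` directory names, image size, batch size, epoch counts, and learning rates are illustrative assumptions, not values from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5          # five ripeness levels
IMG_SIZE = (224, 224)    # illustrative input resolution
BATCH = 32

# Hypothetical directory layout: one sub-folder per ripeness class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "palm_fruit/train", image_size=IMG_SIZE, batch_size=BATCH)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "palm_fruit/test", image_size=IMG_SIZE, batch_size=BATCH)

# Transfer learning: start from ImageNet weights and freeze the backbone.
base = tf.keras.applications.ResNet50(weights="imagenet",
                                      include_top=False, pooling="avg")
base.trainable = False

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=test_ds, epochs=5)

# Fine-tuning: unfreeze the backbone and continue at a much lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=test_ds, epochs=5)
```

Swapping `ResNet50` for `InceptionV3` (with `tf.keras.applications.inception_v3.preprocess_input` and a larger input size) follows the same pattern.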
Related papers
- Classifying Healthy and Defective Fruits with a Multi-Input Architecture and CNN Models [0.0]
The primary aim is to enhance the accuracy of CNN models.
Results reveal that including silhouette images as an additional input to the multi-input architecture yields models with superior performance.
arXiv Detail & Related papers (2024-10-14T21:37:12Z)
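The multi-input entry above feeds silhouette images alongside the RGB images. As a rough sketch of that general idea only (the branch layout, 128x128 inputs, and two-class head are assumptions, not the paper's architecture), a two-input Keras model can be wired as follows.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_branch(name):
    # A small illustrative convolutional branch (depth/width are arbitrary).
    inp = layers.Input(shape=(128, 128, 3), name=name)
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return inp, x

rgb_in, rgb_feat = conv_branch("rgb_image")
sil_in, sil_feat = conv_branch("silhouette_image")

# Fuse both views before the classification head (e.g. healthy vs. defective).
merged = layers.Concatenate()([rgb_feat, sil_feat])
out = layers.Dense(2, activation="softmax")(merged)

model = models.Model(inputs=[rgb_in, sil_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```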
- Combined CNN and ViT features off-the-shelf: Another astounding baseline for recognition [49.14350399025926]
We apply pre-trained architectures, originally developed for the ImageNet Large Scale Visual Recognition Challenge, to periocular recognition.
Middle-layer features from CNNs and ViTs are a suitable way to recognize individuals based on periocular images.
arXiv Detail & Related papers (2024-07-28T11:52:36Z)
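The off-the-shelf-features entry above relies on intermediate activations of pre-trained networks rather than their final outputs. A minimal sketch of extracting such middle-layer CNN features in Keras (the ResNet50 backbone, layer name, and pooling are illustrative choices; the paper also covers ViT features and targets periocular images):

```python
import tensorflow as tf

# Pre-trained backbone used purely as a frozen feature extractor.
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False)
base.trainable = False

# Expose a middle layer's activations instead of the final feature map.
mid = base.get_layer("conv3_block4_out").output
features = tf.keras.layers.GlobalAveragePooling2D()(mid)
extractor = tf.keras.Model(base.input, features)

# Resulting descriptors can be compared with e.g. cosine distance for recognition.
dummy = tf.random.uniform((2, 224, 224, 3), maxval=255.0)
print(extractor(tf.keras.applications.resnet50.preprocess_input(dummy)).shape)
```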
- Fruit Classification System with Deep Learning and Neural Architecture Search [0.9217021281095907]
The study covers 15 distinct fruit categories: Avocado, Banana, Cherry, Apple Braeburn, Apple golden 1, Apricot, Grape, Kiwi, Mango, Orange, Papaya, Peach, Pineapple, Pomegranate, and Strawberry.
The proposed model, with 99.98% mAP, improves on the detection performance of the preceding study that used the same fruit datasets.
arXiv Detail & Related papers (2024-06-04T00:41:47Z)
- Convolutional Neural Network Ensemble Learning for Hyperspectral Imaging-based Blackberry Fruit Ripeness Detection in Uncontrolled Farm Environment [4.292727554656705]
This paper proposes a novel multi-input convolutional neural network (CNN) ensemble classifier for detecting subtle traits of ripeness in blackberry fruits.
The proposed model achieved 95.1% accuracy on unseen sets and 90.2% accuracy under in-field conditions.
arXiv Detail & Related papers (2024-01-09T12:00:17Z)
- Fruit Ripeness Classification: a Survey [59.11160990637616]
Many automatic methods have been proposed that employ a variety of feature descriptors for the food item to be graded.
Machine learning and deep learning techniques dominate the top-performing methods.
Deep learning can operate on raw data, relieving users of the need to compute complex engineered features.
arXiv Detail & Related papers (2022-12-29T19:32:20Z)
- Fruit Quality Assessment with Densely Connected Convolutional Neural Network [0.0]
We have exploited the concept of Densely Connected Convolutional Neural Networks (DenseNets) for fruit quality assessment.
The proposed pipeline achieved a remarkable accuracy of 99.67%.
The robustness of the model was further tested on fruit classification and quality assessment tasks, where it produced similar performance.
arXiv Detail & Related papers (2022-12-08T13:11:47Z)
- End-to-end deep learning for directly estimating grape yield from ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
arXiv Detail & Related papers (2022-08-04T01:34:46Z)
- Facilitated machine learning for image-based fruit quality assessment in developing countries [68.8204255655161]
Automated image classification is a common task for supervised machine learning in food science.
We propose an alternative method based on pre-trained vision transformers (ViTs).
It can be easily implemented with limited resources on a standard device.
arXiv Detail & Related papers (2022-07-10T19:52:20Z)
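The entry above emphasises that pre-trained ViTs can be adapted with limited resources on a standard device. One common low-cost pattern, sketched here under assumptions not taken from the paper (torchvision's ViT-B/16 backbone, a four-class head), is to freeze the backbone and train only a small classification head.

```python
import torch
from torch import nn
import torchvision

NUM_CLASSES = 4  # illustrative number of quality grades

# ImageNet-pre-trained ViT-B/16; only the new head will receive gradients.
weights = torchvision.models.ViT_B_16_Weights.IMAGENET1K_V1
model = torchvision.models.vit_b_16(weights=weights)
for p in model.parameters():
    p.requires_grad = False
model.heads = nn.Linear(model.hidden_dim, NUM_CLASSES)

optimizer = torch.optim.Adam(model.heads.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
preprocess = weights.transforms()  # resizing/normalisation the backbone expects

# Sanity check with a dummy, already-preprocessed batch.
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```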
- Measuring the Ripeness of Fruit with Hyperspectral Imaging and Deep Learning [14.853897011640022]
We present a system to measure the ripeness of fruit with a hyperspectral camera and a suitable deep neural network architecture.
This architecture outperformed competitive baseline models in predicting the state of ripeness.
arXiv Detail & Related papers (2021-04-20T07:43:19Z)
- Fusion of CNNs and statistical indicators to improve image classification [65.51757376525798]
Convolutional Networks have dominated the field of computer vision for the last ten years.
The main strategy to prolong this trend relies on further scaling up network size.
We hypothesise that adding heterogeneous sources of information to a CNN may be more cost-effective than building a bigger network.
arXiv Detail & Related papers (2020-12-20T23:24:31Z)
- Learning CNN filters from user-drawn image markers for coconut-tree image classification [78.42152902652215]
We present a method that needs a minimal set of user-selected images to train the CNN's feature extractor.
The method learns the filters of each convolutional layer from user-drawn markers in image regions that discriminate classes.
It does not rely on optimization based on backpropagation, and we demonstrate its advantages on the binary classification of coconut-tree aerial images.
arXiv Detail & Related papers (2020-08-08T15:50:23Z)
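The last entry learns convolutional filters directly from user-drawn image markers, without backpropagation. The sketch below is only a loose approximation of that idea under simplifying assumptions (a single filter bank, k-means over marker-centred patches); it is not the authors' algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def filters_from_markers(image, marker_coords, patch_size=3, n_filters=8):
    """Estimate a bank of convolutional filters from the patches around
    user-marked pixels by clustering them and using the unit-norm cluster
    centres as kernels. `image` is H x W x C; `marker_coords` is a list of
    (row, col) positions drawn by the user."""
    half = patch_size // 2
    patches = []
    for r, c in marker_coords:
        patch = image[r - half:r + half + 1, c - half:c + half + 1]
        if patch.shape[:2] == (patch_size, patch_size):
            patches.append(patch.reshape(-1))
    patches = np.stack(patches).astype(np.float64)
    centres = KMeans(n_clusters=n_filters, n_init=10).fit(patches).cluster_centers_
    centres /= np.linalg.norm(centres, axis=1, keepdims=True) + 1e-8
    return centres.reshape(n_filters, patch_size, patch_size, -1)

# Toy usage: a random stand-in "aerial image" and a handful of marked pixels.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
markers = [(10, 12), (10, 13), (30, 40), (31, 41), (50, 20),
           (51, 21), (20, 55), (21, 56), (40, 10), (41, 11)]
print(filters_from_markers(img, markers).shape)  # (8, 3, 3, 3)
```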
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.