Image-Based Classification of Olive Species Specific to Turkiye with Deep Neural Networks
- URL: http://arxiv.org/abs/2603.00168v1
- Date: Thu, 26 Feb 2026 09:58:04 GMT
- Title: Image-Based Classification of Olive Species Specific to Turkiye with Deep Neural Networks
- Authors: Irfan Atabas, Hatice Karatas
- Abstract summary: The EfficientNetB0 model exhibited the optimal performance, with an accuracy of 94.5%. Deep learning-based systems offer an effective solution for classifying olive species with high accuracy. The developed method has significant potential for application in areas such as automatic identification and quality control of agricultural products.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, image processing and deep learning methodologies were employed to automatically classify local olive species cultivated in Turkiye. A stereo camera was utilized to capture images of five distinct olive species, which were then preprocessed to ensure their suitability for analysis. Convolutional Neural Network (CNN) architectures, specifically MobileNetV2 and EfficientNetB0, were employed for image classification. These models were optimized through a transfer learning approach. The training and testing results indicated that the EfficientNetB0 model exhibited the optimal performance, with an accuracy of 94.5%. The findings demonstrate that deep learning-based systems offer an effective solution for classifying olive species with high accuracy. The developed method has significant potential for application in areas such as automatic identification and quality control of agricultural products.
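The transfer-learning recipe described in the abstract, keeping a pre-trained backbone frozen and training only a new classification head, can be illustrated with a minimal NumPy sketch. The random "backbone", the toy five-class dataset, and every hyperparameter below are illustrative stand-ins, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "backbone": a frozen random projection playing the role of the
# pre-trained CNN feature extractor (its weights are never updated).
D_IN, D_FEAT, N_CLASSES = 32, 16, 5
W_backbone = rng.normal(size=(D_IN, D_FEAT))

def features(x):
    return np.maximum(x @ W_backbone, 0.0)  # frozen ReLU features

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Trainable classifier head: the only part updated, as in transfer learning.
W_head = np.zeros((D_FEAT, N_CLASSES))

# Toy dataset: five "species", shifted apart so the task is learnable.
X = rng.normal(size=(200, D_IN))
y = rng.integers(0, N_CLASSES, size=200)
X += 2.0 * np.eye(N_CLASSES)[y] @ rng.normal(size=(N_CLASSES, D_IN))

for _ in range(300):  # gradient descent on cross-entropy, head only
    F = features(X)
    P = softmax(F @ W_head)
    grad = F.T @ (P - np.eye(N_CLASSES)[y]) / len(X)
    W_head -= 0.05 * grad

acc = (softmax(features(X) @ W_head).argmax(axis=1) == y).mean()
print(f"training accuracy of the head: {acc:.2f}")
```

In the paper's actual setting the frozen projection would be replaced by an ImageNet-pretrained MobileNetV2 or EfficientNetB0, but the division of labour is the same: fixed features, trainable head.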
Related papers
- Image-Based Classification of Olive Varieties Native to Turkiye Using Multiple Deep Learning Architectures: Analysis of Performance, Complexity, and Generalization [0.0]
This study compares multiple deep learning architectures for the automated, image-based classification of five locally cultivated black table olive varieties in Turkey. Ten architectures - MobileNetV2, EfficientNetB0, EfficientNetV2-S, ResNet50, ResNet101, DenseNet121, InceptionV3, ConvNeXt-Tiny, ViT-B16, and Swin-T - were trained using transfer learning. EfficientNetV2-S achieved the highest classification accuracy (95.8%), while EfficientNetB0 provided the best trade-off between accuracy and computational complexity.
arXiv Detail & Related papers (2026-02-20T07:26:11Z)
- Enhancing Histopathological Image Classification via Integrated HOG and Deep Features with Robust Noise Performance [0.0]
This study evaluates the classification performance of machine learning and deep learning models on the LC25000 dataset. The fine-tuned InceptionResNet-v2 achieved a classification accuracy of 96.01% and an average AUC of 96.8%. Models trained on deep features from InceptionResNet-v2 outperformed those using only the pre-trained network.
arXiv Detail & Related papers (2026-01-03T03:33:10Z)
- Deep Learning for Automated Identification of Vietnamese Timber Species: A Tool for Ecological Monitoring and Conservation [2.1466764570532004]
In this study, we explore the application of deep learning to automate the classification of ten wood species commonly found in Vietnam. A custom image dataset was constructed from field-collected wood samples, and five state-of-the-art convolutional neural network architectures were evaluated. ShuffleNetV2 achieved the best balance between classification performance and computational efficiency, with an average accuracy of 99.29% and F1-score of 99.35% over 20 independent runs.
arXiv Detail & Related papers (2025-08-13T02:54:58Z)
- Self-Supervised Learning in Deep Networks: A Pathway to Robust Few-Shot Classification [0.0]
We first pre-train the model with self-supervision to enable it to learn common feature expressions on a large amount of unlabeled data.
We then fine-tune it on the few-shot dataset Mini-ImageNet to improve the model's accuracy and generalization ability under limited data.
arXiv Detail & Related papers (2024-11-19T01:01:56Z)
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [73.6767681305851]
Blind image quality assessment (IQA) in the wild presents significant challenges. Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem. Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion priors-based IQA.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- A Light-weight Deep Learning Model for Remote Sensing Image Classification [70.66164876551674]
We present a high-performance and light-weight deep learning model for Remote Sensing Image Classification (RSIC).
By conducting extensive experiments on the NWPU-RESISC45 benchmark, our proposed teacher-student models outperform the state-of-the-art systems.
arXiv Detail & Related papers (2023-02-25T09:02:01Z)
- NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration [66.22668336495175]
Neural networks that lack proper calibration will not gain trust from humans.
We introduce the Neural Clamping Toolkit, the first open-source framework designed to help developers employ state-of-the-art model-agnostic calibration techniques.
arXiv Detail & Related papers (2022-11-29T15:03:05Z)
- Facilitated machine learning for image-based fruit quality assessment in developing countries [68.8204255655161]
Automated image classification is a common task for supervised machine learning in food science.
We propose an alternative method based on pre-trained vision transformers (ViTs).
It can be easily implemented with limited resources on a standard device.
arXiv Detail & Related papers (2022-07-10T19:52:20Z)
- Self-Denoising Neural Networks for Few Shot Learning [66.38505903102373]
We present a new training scheme that adds noise at multiple stages of an existing neural architecture while simultaneously learning to be robust to this added noise.
This architecture, which we call a Self-Denoising Neural Network (SDNN), can be applied easily to most modern convolutional neural architectures.
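The noise-injection idea behind an SDNN can be sketched as a forward pass that adds Gaussian noise after each hidden activation during training, so the network must learn representations that survive the perturbation. The tiny MLP below is a schematic stand-in, not the authors' architecture, and the noise level is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(x, weights, noise_std=0.0, rng=rng):
    """Forward pass through a small MLP, injecting Gaussian noise after
    every hidden layer -- the core self-denoising idea: train the network
    to remain accurate despite noise added at multiple stages."""
    h = x
    for i, W in enumerate(weights):
        h = h @ W
        if i < len(weights) - 1:        # hidden layers only
            h = np.maximum(h, 0.0)      # ReLU
            if noise_std > 0.0:
                h = h + rng.normal(scale=noise_std, size=h.shape)
    return h
```

At test time one would simply call `forward(..., noise_std=0.0)` to get the clean, deterministic prediction.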
arXiv Detail & Related papers (2021-10-26T03:28:36Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Classification of Seeds using Domain Randomization on Self-Supervised Learning Frameworks [0.0]
A key bottleneck is the need for an extensive amount of labelled data to train convolutional neural networks (CNNs).
The work leverages the concepts of Contrastive Learning and Domain Randomization in order to achieve the same.
The use of synthetic images generated from a representational sample crop of real-world images alleviates the need for a large volume of test subjects.
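Contrastive learning of the kind referenced above is commonly implemented with an NT-Xent (SimCLR-style) loss over pairs of augmented views. The NumPy version below is a generic sketch of that loss, not code from the paper, and the temperature value is an assumption:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of N embedding pairs;
    z1[i] and z2[i] are embeddings of two augmented views of sample i.
    Each view's positive is its partner; all other 2N-2 embeddings
    in the batch act as negatives."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise
    sim = z @ z.T / temperature                        # scaled cosine sims
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # the positive for index i is its other view: i+n for i<n, i-n otherwise
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

Minimising this loss pulls the two views of each sample together while pushing apart embeddings of different samples, which is what lets synthetic, domain-randomized images supply the training signal without labels.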
arXiv Detail & Related papers (2021-03-29T12:50:06Z)
- Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not have a predictable recognition behavior with respect to the input resolution change.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
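Knowledge distillation over model predictions can be sketched as a temperature-softened KL term between a "teacher" distribution (in the setting above, the model's own prediction at a different input resolution) and a "student" distribution. The function below is a generic soft-label distillation loss, not the authors' code; the temperature and the conventional T^2 scaling are assumptions:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Soft-label knowledge distillation: KL(teacher || student) on
    temperature-softened class distributions. Higher temperatures expose
    more of the teacher's relative preferences among non-top classes."""
    t = softmax(teacher_logits / temperature)
    s = softmax(student_logits / temperature)
    kl = (t * (np.log(t) - np.log(s))).sum(axis=-1).mean()
    return temperature ** 2 * kl  # conventional T^2 gradient scaling
```

The loss is zero when the two logit sets agree and strictly positive otherwise, so it can be added on the fly to the ordinary cross-entropy term during training.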
arXiv Detail & Related papers (2020-07-13T04:27:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.