Fruit Fly Classification (Diptera: Tephritidae) in Images, Applying Transfer Learning
- URL: http://arxiv.org/abs/2502.00939v1
- Date: Sun, 02 Feb 2025 22:16:04 GMT
- Title: Fruit Fly Classification (Diptera: Tephritidae) in Images, Applying Transfer Learning
- Authors: Erick Andrew Bustamante Flores, Harley Vera Olivera, Ivan Cesar Medrano Valencia, Carlos Fernando Montoya Cubas
- Abstract summary: This study develops a transfer learning model for the automated classification of two species of fruit flies.
Inception-v3 is an effective and replicable approach for classifying Anastrepha fraterculus and Ceratitis capitata.
- Score: 8.700842317740943
- License:
- Abstract: This study develops a transfer learning model for the automated classification of two species of fruit flies, Anastrepha fraterculus and Ceratitis capitata, in a controlled laboratory environment. The research addresses the need to optimize identification and classification, tasks currently performed manually by experts and therefore subject to human error and time constraints. The methodological process includes capturing high-quality images with a mobile phone camera and a stereo microscope, followed by segmentation to reduce image size and focus on the relevant morphological areas. The images were carefully labeled and preprocessed to ensure the quality and consistency of the dataset used to train the pre-trained convolutional neural network models VGG16, VGG19, and Inception-v3. The results were evaluated using the F1-score: VGG16 and VGG19 each achieved 82%, while Inception-v3 reached 93%. Inception-v3's reliability was further verified by testing the model in uncontrolled environments, with positive results, and Grad-CAM visualizations confirmed its ability to capture essential morphological features. These findings indicate that Inception-v3 is an effective and replicable approach for classifying Anastrepha fraterculus and Ceratitis capitata, with potential for implementation in automated monitoring systems.
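The paper's training code is not reproduced here, but the pipeline the abstract describes, an ImageNet-pretrained Inception-v3 backbone with a new classification head for the two species, can be sketched in a few lines of Keras. The image size, batch size, directory layout, and head layers below are illustrative assumptions, not values reported by the authors.

```python
# Minimal transfer-learning sketch (assumed setup, not the authors' code):
# adapt ImageNet-pretrained Inception-v3 to two fruit-fly classes.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)          # Inception-v3's native input size
NUM_CLASSES = 2                # A. fraterculus vs. C. capitata

# Hypothetical directory layout: data/train/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32)

# ImageNet-pretrained backbone with the original classification head removed.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False         # freeze pretrained features for the first stage

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Fine-tuning (unfreezing the top Inception blocks at a lower learning rate), F1-score evaluation, and Grad-CAM inspection would follow on top of this sketch; those steps are omitted for brevity.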
Related papers
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z) - Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease
detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z) - Stacking Ensemble Learning in Deep Domain Adaptation for Ophthalmic
- Stacking Ensemble Learning in Deep Domain Adaptation for Ophthalmic Image Classification [61.656149405657246]
Domain adaptation is effective in image classification tasks where obtaining sufficient labeled data is challenging.
We propose a novel method, named SELDA, for stacking ensemble learning by extending three domain adaptation methods.
The experimental results using Age-Related Eye Disease Study (AREDS) benchmark ophthalmic dataset demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2022-09-27T14:19:00Z) - DeFungi: Direct Mycological Examination of Microscopic Fungi Images [0.0]
This paper presents experimental results classifying five fungi types using two different deep learning approaches and three different convolutional neural network models.
The best-performing model trained from scratch was Inception V3, reporting 73.2% accuracy.
The dataset built is published on Kaggle and GitHub to foster future research.
arXiv Detail & Related papers (2021-09-15T14:25:28Z) - Cross-Site Severity Assessment of COVID-19 from CT Images via Domain
Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images provides valuable support for estimating intensive care unit events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z) - Ensemble of CNN classifiers using Sugeno Fuzzy Integral Technique for
Cervical Cytology Image Classification [1.6986898305640261]
We propose a fully automated computer-aided diagnosis tool for classifying single-cell and slide images of cervical cancer.
We use the Sugeno Fuzzy Integral to ensemble the decision scores from three popular deep learning models, namely, Inception v3, DenseNet-161 and ResNet-34.
arXiv Detail & Related papers (2021-08-21T08:41:41Z) - A Systematic Evaluation of Domain Adaptation in Facial Expression
- A Systematic Evaluation of Domain Adaptation in Facial Expression Recognition [0.0]
This paper provides a systematic evaluation of domain adaptation in facial expression recognition.
We use state-of-the-art transfer learning techniques and six commonly-used facial expression datasets.
We find the sobering result that transfer learning accuracy is not high and varies idiosyncratically with the target dataset.
arXiv Detail & Related papers (2021-06-29T14:41:19Z) - Efficient Self-supervised Vision Transformers for Representation
Learning [86.57557009109411]
We show that multi-stage architectures with sparse self-attentions can significantly reduce modeling complexity.
We propose a new pre-training task of region matching which allows the model to capture fine-grained region dependencies.
Our results show that combining the two techniques, EsViT achieves 81.3% top-1 on the ImageNet linear probe evaluation.
arXiv Detail & Related papers (2021-06-17T19:57:33Z) - Interpreting Deep Learning Models for Epileptic Seizure Detection on EEG
signals [4.748221780751802]
Deep Learning (DL) is often considered the state of the art for Artificial Intelligence-based medical decision support.
It remains sparsely implemented in clinical practice and poorly trusted by clinicians due to insufficient interpretability of neural network models.
We have tackled this issue by developing interpretable DL models in the context of online detection of epileptic seizures, based on EEG signals.
arXiv Detail & Related papers (2020-12-22T11:10:23Z) - Automated Testing for Deep Learning Systems with Differential Behavior
Criteria [5.653421430985333]
We conducted a study on building an automated testing system for deep learning systems based on differential behavior criteria.
By observing differential behaviors from three pre-trained models during each testing iteration, the input image that triggered erroneous feedback was registered as a corner-case.
We explored other approaches based on differential behavior criteria to generate photo-realistic images for deep learning systems.
arXiv Detail & Related papers (2019-12-31T10:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.