Deep Learning approach for Classifying Trusses and Runners of
Strawberries
- URL: http://arxiv.org/abs/2207.02721v1
- Date: Wed, 6 Jul 2022 14:48:35 GMT
- Title: Deep Learning approach for Classifying Trusses and Runners of
Strawberries
- Authors: Jakub Pomykala, Francisco de Lemos, Isibor Kennedy Ihianle, David Ada
Adama, Pedro Machado
- Abstract summary: This paper proposes the use of Deep Learning for the classification of trusses and runners of strawberry plants.
The proposed approach is based on the use of noise (Gaussian, Speckle, Poisson and Salt-and-Pepper) to artificially augment the dataset.
The results are evaluated using the mean values of precision, recall and F1 score.
- Score: 1.6799377888527687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of artificial intelligence in the agricultural sector has been
growing at a rapid rate to automate farming activities. Emergent farming
technologies focus on mapping and classification of plants, fruits, diseases,
and soil types. Although assisted harvesting and pruning applications using
deep learning algorithms are in the early development stages, there is a demand
for solutions to automate such processes. This paper proposes the use of Deep
Learning for the classification of trusses and runners of strawberry plants
using semantic segmentation and dataset augmentation. The proposed approach is
based on the use of noise (Gaussian, Speckle, Poisson and Salt-and-Pepper) to
artificially augment the dataset, compensating for the low number of data
samples and increasing the overall classification performance. The results are
evaluated using the mean values of precision, recall and F1 score. The proposed
approach achieved 91%, 95% and 92% precision, recall and F1 score,
respectively, for truss detection using ResNet101 with dataset augmentation
utilising Salt-and-Pepper noise; and 83%, 53% and 65% precision, recall and F1
score, respectively, for truss detection using ResNet50 with dataset
augmentation utilising Poisson noise.
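
The paper does not publish source code, so the sketch below is only an illustrative assumption of how the noise-based augmentation and the per-class evaluation could look in Python; the use of scikit-image's random_noise and the specific noise parameters are my choices, not the authors'.

```python
# Illustrative sketch only: the paper does not publish code, so the library
# choice (scikit-image) and the noise parameters below are assumptions.
import numpy as np
from skimage.util import random_noise

# The four noise models named in the abstract, mapped to random_noise modes.
NOISE_MODES = {
    "gaussian":        {"mode": "gaussian", "var": 0.01},
    "speckle":         {"mode": "speckle", "var": 0.01},
    "poisson":         {"mode": "poisson"},
    "salt_and_pepper": {"mode": "s&p", "amount": 0.05},
}


def augment_with_noise(image: np.ndarray) -> dict:
    """Return one noisy copy of `image` per noise model.

    `image` is expected as a float array in [0, 1]; each noisy copy can be
    added to the training set alongside the original to enlarge a small
    dataset before training the segmentation network.
    """
    return {name: random_noise(image, **kw) for name, kw in NOISE_MODES.items()}


def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, recall and F1 from matched detection counts
    (e.g. truss or runner detections compared against ground truth)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dummy_image = rng.random((64, 64, 3))          # stand-in for a strawberry-plant image
    noisy = augment_with_noise(dummy_image)
    print({k: v.shape for k, v in noisy.items()})
    print(precision_recall_f1(tp=91, fp=9, fn=5))  # arbitrary counts, for illustration only
```

The abstract reports results per noise type (Salt-and-Pepper with ResNet101, Poisson with ResNet50), which suggests one separately augmented training set per noise model; training of the segmentation backbones themselves is outside the scope of this sketch.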
Related papers
- Dynamic Data Pruning for Automatic Speech Recognition [58.95758272440217]
We introduce Dynamic Data Pruning for ASR (DDP-ASR), which offers fine-grained pruning granularities specifically tailored for speech-related datasets.
Our experiments show that DDP-ASR can save up to 1.6x training time with negligible performance loss.
arXiv Detail & Related papers (2024-06-26T14:17:36Z) - A Meta-Learning Approach to Predicting Performance and Data Requirements [163.4412093478316]
We propose an approach to estimate the number of samples required for a model to reach a target performance.
We find that the power law, the de facto principle to estimate model performance, leads to large error when using a small dataset.
We introduce a novel piecewise power law (PPL) that handles the two data regimes differently.
arXiv Detail & Related papers (2023-03-02T21:48:22Z) - Exploring the Value of Pre-trained Language Models for Clinical Named
Entity Recognition [6.917786124918387]
We compare Transformer models that are trained from scratch to fine-tuned BERT-based LLMs.
We examine the impact of an additional CRF layer on such models to encourage contextual learning.
arXiv Detail & Related papers (2022-10-23T16:27:31Z) - Comparing Machine Learning Techniques for Alfalfa Biomass Yield
Prediction [0.8808021343665321]
The alfalfa crop is globally important as livestock feed, so highly efficient planting and harvesting could benefit many industries.
Recent work using machine learning to predict yields for alfalfa and other crops has shown promise.
Previous efforts used remote sensing, weather, planting, and soil data to train machine learning models for yield prediction.
arXiv Detail & Related papers (2022-10-20T13:00:33Z) - Patients' Severity States Classification based on Electronic Health
Record (EHR) Data using Multiple Machine Learning and Deep Learning
Approaches [0.8312466807725921]
This research presents an examination of categorizing the severity states of patients based on their electronic health records.
The suggested method uses an EHR dataset collected from an open-source platform to categorize severity.
arXiv Detail & Related papers (2022-09-29T16:14:02Z) - Transformers Can Do Bayesian Inference [56.99390658880008]
We present Prior-Data Fitted Networks (PFNs).
PFNs leverage in-context learning in large-scale machine learning techniques to approximate a large set of posteriors.
We demonstrate that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems.
arXiv Detail & Related papers (2021-12-20T13:07:39Z) - A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows
from UAV Imagery [56.10033255997329]
We propose a novel deep learning method based on a Convolutional Neural Network (CNN).
It simultaneously detects and geolocates plantation-rows while counting their plants, considering highly dense plantation configurations.
The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops.
arXiv Detail & Related papers (2020-12-31T18:51:17Z) - Convolutional Neural Network for Elderly Wandering Prediction in Indoor
Scenarios [0.0]
This work proposes a way to detect the wandering activity of Alzheimer's patients from path data collected from non-intrusive indoor sensors around the house.
Due to the lack of adequate data, we have manually generated a dataset of 220 paths using our own developed application.
Wandering patterns in the literature are normally identified by visual features (such as loops or random movement).
arXiv Detail & Related papers (2020-12-23T21:27:37Z) - Uncertainty-aware Self-training for Text Classification with Few Labels [54.13279574908808]
We study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck.
We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network.
We show that our methods, leveraging only 20-30 labeled samples per class for each task for training and validation, can perform within 3% of fully supervised pre-trained language models.
arXiv Detail & Related papers (2020-06-27T08:13:58Z) - Ensemble Wrapper Subsampling for Deep Modulation Classification [70.91089216571035]
Subsampling of received wireless signals is important for relaxing hardware requirements as well as the computational cost of signal processing algorithms.
We propose a subsampling technique to facilitate the use of deep learning for automatic modulation classification in wireless communication systems.
arXiv Detail & Related papers (2020-05-10T06:11:13Z)