Investigation to answer three key questions concerning plant pest identification and development of a practical identification framework
- URL: http://arxiv.org/abs/2407.18000v1
- Date: Thu, 25 Jul 2024 12:49:24 GMT
- Title: Investigation to answer three key questions concerning plant pest identification and development of a practical identification framework
- Authors: Ryosuke Wayama, Yuki Sasaki, Satoshi Kagiwada, Nobusuke Iwasaki, Hitoshi Iyatomi
- Abstract summary: We develop an accurate, robust, and fast plant pest identification framework using 334K images.
Our two-stage plant pest identification framework achieved a highly practical performance of 91.0% and 88.5% in mean accuracy and macro F1 score, respectively.
- Score: 2.388418486046813
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The development of practical and robust automated diagnostic systems for identifying plant pests is crucial for efficient agricultural production. In this paper, we first investigate three key research questions (RQs) that have not been addressed thus far in the field of image-based plant pest identification. Based on the knowledge gained, we then develop an accurate, robust, and fast plant pest identification framework using 334K images comprising 78 combinations of four plant portions (the leaf front, leaf back, fruit, and flower of cucumber, tomato, strawberry, and eggplant) and 20 pest species captured at 27 farms. The results reveal the following. (1) For an appropriate evaluation of the model, the test data should not include images of the field from which the training images were collected, or other considerations to increase the diversity of the test set should be taken into account. (2) Pre-extraction of ROIs, such as leaves and fruits, helps to improve identification accuracy. (3) Integration of closely related species using the same control methods and cross-crop training methods for the same pests are effective. Our two-stage plant pest identification framework, enabling ROI detection and convolutional neural network (CNN)-based identification, achieved a highly practical performance of 91.0% and 88.5% in mean accuracy and macro F1 score, respectively, for 12,223 instances of test data of 21 classes collected from unseen fields, where 25 classes of images from 318,971 samples were used for training; the average identification time was 476 ms/image.
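The abstract describes the framework only at a high level (ROI detection followed by CNN-based identification), so the following Python sketch is a hedged illustration of how such a two-stage pipeline can be wired together, not the authors' implementation. The detector (torchvision Faster R-CNN), classifier (ResNet-50), score threshold, 224x224 crop size, and the `identify_pests` helper are all assumptions made for illustration.

```python
# Minimal sketch of a two-stage pest identification pipeline:
# stage 1 crops regions of interest (e.g., leaves, fruits),
# stage 2 classifies each crop with a CNN.
# Model choices here are illustrative assumptions, not the paper's models.
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

NUM_PEST_CLASSES = 21  # number of test-time classes reported in the abstract

# Stage 1: generic object detector used as an ROI extractor (assumption).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

# Stage 2: CNN classifier over the cropped ROIs (assumption).
classifier = torchvision.models.resnet50(weights="DEFAULT")
classifier.fc = torch.nn.Linear(classifier.fc.in_features, NUM_PEST_CLASSES)
classifier.eval()

@torch.no_grad()
def identify_pests(image_path: str, score_threshold: float = 0.5):
    """Return one pest-class index per detected ROI (hypothetical helper)."""
    image = Image.open(image_path).convert("RGB")
    tensor = F.to_tensor(image)

    # Stage 1: detect candidate plant-portion ROIs.
    detections = detector([tensor])[0]
    predictions = []
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < score_threshold:
            continue
        x1, y1, x2, y2 = [int(v) for v in box.tolist()]
        crop = image.crop((x1, y1, x2, y2)).resize((224, 224))

        # Stage 2: classify the cropped ROI (input normalization omitted for brevity).
        logits = classifier(F.to_tensor(crop).unsqueeze(0))
        predictions.append(int(logits.argmax(dim=1)))
    return predictions
```

In the paper, the first stage is trained to find plant portions such as leaves and fruits and the second stage covers the pest classes; the off-the-shelf models above are stand-ins that only show the data flow between the two stages.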
Related papers
- PlantSeg: A Large-Scale In-the-wild Dataset for Plant Disease Segmentation [37.383095056084834]
Plant disease datasets typically lack segmentation labels.
Unlike typical datasets that contain images from laboratory settings, PlantSeg primarily comprises in-the-wild plant disease images.
PlantSeg is extensive, featuring 11,400 images with disease segmentation masks and an additional 8,000 healthy plant images categorized by plant type.
arXiv Detail & Related papers (2024-09-06T06:11:28Z)
- High-Throughput Phenotyping using Computer Vision and Machine Learning [0.0]
We used a dataset provided by Oak Ridge National Laboratory with 1,672 images of Populus trichocarpa bearing white labels that display the treatment.
Optical character recognition (OCR) was used to read these labels on the plants.
Machine learning models were then used to predict treatment from these readings, and encoded EXIF tags were analyzed to determine leaf size and correlations between phenotypes.
arXiv Detail & Related papers (2024-07-08T19:46:31Z)
- Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use deep learning methods to semantically segment grapevine leaf images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Leaf Image-based Plant Disease Identification using Color and Texture Features [0.1657441317977376]
The accuracy on a self-collected dataset is 82.47% for disease identification and 91.40% for healthy and diseased classification.
This prototype system can be extended by adding more disease categories or targeting specific crop or disease categories.
arXiv Detail & Related papers (2021-02-08T20:32:56Z)
- A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows from UAV Imagery [56.10033255997329]
We propose a novel deep learning method based on a Convolutional Neural Network (CNN).
It simultaneously detects and geolocates plantation-rows while counting their plants, even in highly dense plantation configurations.
The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops.
arXiv Detail & Related papers (2020-12-31T18:51:17Z)
- One-Shot Learning with Triplet Loss for Vegetation Classification Tasks [45.82374977939355]
The triplet loss function is one of the options that can significantly improve the accuracy of one-shot learning tasks; a minimal illustrative sketch of such a setup appears after this list.
Since 2015, many projects have used Siamese networks with this kind of loss for face recognition and object classification.
arXiv Detail & Related papers (2020-12-14T10:44:22Z)
- Real-time Plant Health Assessment Via Implementing Cloud-based Scalable Transfer Learning On AWS DeepLens [0.8714677279673736]
We propose a machine learning approach to detect and classify plant leaf diseases.
We use scalable transfer learning on AWS SageMaker and import the resulting model to AWS DeepLens for real-time practical usability.
Our experiments on an extensive image dataset of healthy and unhealthy fruit and vegetable leaves showed an accuracy of 98.78% with real-time diagnosis of plant leaf diseases.
arXiv Detail & Related papers (2020-09-09T05:23:34Z)
- Pollen13K: A Large Scale Microscope Pollen Grain Image Dataset [63.05335933454068]
This work presents the first large-scale pollen grain image dataset, including more than 13 thousand objects.
The paper focuses on the data acquisition steps employed, which include aerobiological sampling, microscope image acquisition, object detection, segmentation, and labelling.
arXiv Detail & Related papers (2020-07-09T10:33:31Z)
- Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
arXiv Detail & Related papers (2020-05-18T21:57:47Z)
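The one-shot learning entry above mentions Siamese networks trained with a triplet loss; as flagged there, the following hedged PyTorch sketch shows one common way such a setup is wired. The embedding architecture, margin value, and toy tensors are illustrative assumptions, not details taken from any of the listed papers.

```python
# Minimal sketch of an embedding network trained with a triplet loss,
# as referenced in the one-shot learning entry above.
# Architecture and margin are illustrative assumptions.
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    """Small CNN that maps an image to a 128-d embedding."""
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        # L2-normalize embeddings so distances are comparable across samples.
        return nn.functional.normalize(self.head(z), dim=1)

net = EmbeddingNet()
triplet_loss = nn.TripletMarginLoss(margin=0.2)

# One toy training step on random tensors standing in for
# anchor / positive (same class) / negative (different class) images.
anchor, positive, negative = (torch.randn(8, 3, 64, 64) for _ in range(3))
loss = triplet_loss(net(anchor), net(positive), net(negative))
loss.backward()
```

At inference time, a query image is embedded the same way and matched to the nearest class prototype, which is what makes the approach usable with very few labeled examples per class.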
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.