Multi-growth stage plant recognition: a case study of Palmer amaranth
(Amaranthus palmeri) in cotton (Gossypium hirsutum)
- URL: http://arxiv.org/abs/2307.15816v1
- Date: Fri, 28 Jul 2023 21:14:43 GMT
- Title: Multi-growth stage plant recognition: a case study of Palmer amaranth
(Amaranthus palmeri) in cotton (Gossypium hirsutum)
- Authors: Guy RY Coleman, Matthew Kutugata, Michael J Walsh, Muthukumar
Bagavathiannan
- Abstract summary: We investigate eight-class growth stage recognition of Amaranthus palmeri in cotton.
We compare 26 different architecture variants from YOLO v3, v5, v6, v6 3.0, v7, and v8.
Highest mAP@[0.5:0.95] for recognition of all growth stage classes was 47.34% achieved by v8-X.
- Score: 0.3441021278275805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many advanced, image-based precision agricultural technologies for plant
breeding, field crop research, and site-specific crop management hinge on the
reliable detection and phenotyping of plants across highly variable
morphological growth stages. Convolutional neural networks (CNNs) have shown
promise for image-based plant phenotyping and weed recognition, but their
ability to recognize growth stages, often with stark differences in appearance,
is uncertain. Amaranthus palmeri (Palmer amaranth) is a particularly
challenging weed plant in cotton (Gossypium hirsutum) production, exhibiting
highly variable plant morphology both across growth stages over a growing
season and between plants at a given growth stage due to high genetic
diversity. In this paper, we investigate eight-class growth stage recognition
of A. palmeri in cotton as a challenging model for You Only Look Once (YOLO)
architectures. We compare 26 different architecture variants from YOLO v3, v5,
v6, v6 3.0, v7, and v8 on an eight-class growth stage dataset of A. palmeri.
The highest mAP@[0.5:0.95] for recognition of all growth stage classes was
47.34% achieved by v8-X, with inter-class confusion across visually similar
growth stages. With all growth stages grouped as a single class, performance
increased, with a maximum mean average precision (mAP@[0.5:0.95]) of 67.05%
achieved by v7-Original. Single class recall of up to 81.42% was achieved by
v5-X, and precision of up to 89.72% was achieved by v8-X. Class activation maps
(CAM) were used to understand model attention on the complex dataset. Fewer
classes, grouped by visual or size features, improved performance over the
ground-truth eight-class dataset. Successful growth stage detection highlights
the substantial opportunity for improving plant phenotyping and weed
recognition technologies with open-source object detection architectures.
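The single-class results above come from collapsing the eight growth-stage labels into one class before evaluation, scored with mAP@[0.5:0.95] (average precision averaged over IoU thresholds 0.50 to 0.95 in steps of 0.05). A minimal sketch of both steps, with placeholder class names and AP values that are illustrative only and not taken from the paper:

```python
# Sketch: collapse eight growth-stage classes into a single class, and
# average per-threshold AP into mAP@[0.5:0.95].
# Class names and AP values below are illustrative placeholders, not the
# paper's actual label set or results.

GROWTH_STAGES = [
    "cotyledon", "2-4 leaf", "5-8 leaf", "9-12 leaf",
    "13+ leaf", "flowering", "mature", "senescing",
]

def collapse_to_single_class(annotations):
    """Map every growth-stage label to one 'A. palmeri' class."""
    return [
        {**ann, "label": "A. palmeri"} if ann["label"] in GROWTH_STAGES else ann
        for ann in annotations
    ]

def map_50_95(ap_per_threshold):
    """mAP@[0.5:0.95]: mean AP over the 10 IoU thresholds 0.50, 0.55, ..., 0.95."""
    assert len(ap_per_threshold) == 10
    return sum(ap_per_threshold) / len(ap_per_threshold)

anns = [{"label": "flowering", "bbox": [10, 10, 50, 80]}]
print(collapse_to_single_class(anns)[0]["label"])  # A. palmeri
print(map_50_95([0.9, 0.85, 0.8, 0.75, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]))  # 0.6
```

Relabeling before evaluation (rather than retraining) is one plausible way to produce the grouped-class comparison; the paper's exact grouping procedure may differ.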
Related papers
- Investigation to answer three key questions concerning plant pest identification and development of a practical identification framework [2.388418486046813]
We develop an accurate, robust, and fast plant pest identification framework using 334K images.
Our two-stage plant pest identification framework achieved a highly practical performance of 91.0% mean accuracy and 88.5% macro F1 score.
arXiv Detail & Related papers (2024-07-25T12:49:24Z)
- From Seedling to Harvest: The GrowingSoy Dataset for Weed Detection in Soy Crops via Instance Segmentation [0.2605569739850177]
We introduce a comprehensive dataset for training neural networks to detect weeds and soy plants through instance segmentation.
Our dataset covers various stages of soy growth, offering a chronological perspective on the impact of weed invasion.
We also provide six state-of-the-art models, trained on this dataset, that can detect soy and weeds at every stage of the plantation process.
arXiv Detail & Related papers (2024-06-01T06:12:48Z)
- Agave crop segmentation and maturity classification with deep learning data-centric strategies using very high-resolution satellite imagery [101.18253437732933]
We present an Agave tequilana Weber azul crop segmentation and maturity classification using very high resolution satellite imagery.
We solve real-world deep learning problems in the very specific context of agave crop segmentation.
With the resulting accurate models, agave production forecasting can be made available for large regions.
arXiv Detail & Related papers (2023-03-21T03:15:29Z)
- Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use deep learning methods to semantically segment grapevine leaf images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring, through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z)
- Assessing The Performance of YOLOv5 Algorithm for Detecting Volunteer Cotton Plants in Corn Fields at Three Different Growth Stages [5.293431074053198]
The Texas Boll Weevil Eradication Program (TBWEP) employs people to locate and eliminate volunteer cotton (VC) plants growing by the side of roads or in fields with rotation crops.
In this paper, we demonstrate the application of a computer vision (CV) algorithm based on You Only Look Once version 5 (YOLOv5) for detecting VC plants growing in the middle of corn fields.
arXiv Detail & Related papers (2022-07-31T21:03:40Z)
- The Power of Transfer Learning in Agricultural Applications: AgriNet [1.9087335681007478]
We propose the AgriNet dataset, a collection of 160k agricultural images from more than 19 geographical locations.
We also introduce AgriNet models, a set of pretrained models on five ImageNet architectures.
All proposed models were found to accurately classify the 423 classes of plant species, diseases, pests, and weeds with a minimum accuracy of 87%.
arXiv Detail & Related papers (2022-07-08T13:15:16Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of healthy versus stressed crops at the plant level.
Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows from UAV Imagery [56.10033255997329]
We propose a novel deep learning method based on a Convolutional Neural Network (CNN).
It simultaneously detects and geolocates plantation rows while counting their plants, considering highly dense plantation configurations.
The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant rows in UAV images from different types of crops.
arXiv Detail & Related papers (2020-12-31T18:51:17Z)
- One-Shot Learning with Triplet Loss for Vegetation Classification Tasks [45.82374977939355]
The triplet loss function is one of the options that can significantly improve the accuracy of one-shot learning tasks.
Since 2015, many projects have used Siamese networks and this kind of loss for face recognition and object classification.
arXiv Detail & Related papers (2020-12-14T10:44:22Z)
- Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
arXiv Detail & Related papers (2020-05-18T21:57:47Z)
- Automatic Plant Image Identification of Vietnamese species using Deep Learning Models [0.0]
The Vietnamese plant image dataset was collected from an online encyclopedia of Vietnamese organisms, together with the Encyclopedia of Life.
Four deep convolutional feature extraction models, namely MobileNetV2, VGG16, ResNetV2, and Inception-ResNet V2, are presented.
The proposed models achieve promising recognition rates, and MobileNetV2 attained the highest at 83.9%.
arXiv Detail & Related papers (2020-05-05T09:59:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.