WeedVision: Multi-Stage Growth and Classification of Weeds using DETR and RetinaNet for Precision Agriculture
- URL: http://arxiv.org/abs/2502.14890v1
- Date: Sun, 16 Feb 2025 20:49:22 GMT
- Title: WeedVision: Multi-Stage Growth and Classification of Weeds using DETR and RetinaNet for Precision Agriculture
- Authors: Taminul Islam, Toqi Tahamid Sarker, Khaled R Ahmed, Cristiana Bernardi Rankrape, Karla Gage
- Abstract summary: This research uses object detection models to identify and classify 16 weed species of economic concern across 174 classes. A robust dataset comprising 203,567 images was developed, meticulously labeled by species and growth stage. RetinaNet demonstrated superior performance, achieving a mean Average Precision (mAP) of 0.907 on the training set and 0.904 on the test set.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Weed management remains a critical challenge in agriculture, where weeds compete with crops for essential resources, leading to significant yield losses. Accurate detection of weeds at various growth stages is crucial for effective management yet challenging for farmers, as it requires identifying different species at multiple growth phases. This research addresses these challenges by utilizing advanced object detection models, specifically the Detection Transformer (DETR) with a ResNet50 backbone and RetinaNet with a ResNeXt101 backbone, to identify and classify 16 weed species of economic concern across 174 classes, spanning 11 weeks of growth stages from seedling to maturity. A robust dataset comprising 203,567 images was developed, meticulously labeled by species and growth stage. The models were rigorously trained and evaluated, with RetinaNet demonstrating superior performance, achieving a mean Average Precision (mAP) of 0.907 on the training set and 0.904 on the test set, compared to DETR's mAP of 0.854 and 0.840, respectively. RetinaNet also outperformed DETR in recall and achieved an inference speed of 7.28 FPS, making it more suitable for real-time applications. Both models showed improved accuracy as plants matured. This research provides crucial insights for developing precise, sustainable, and automated weed management strategies, paving the way for real-time, species-specific detection systems and advancing AI-assisted agriculture through continued innovation in model development and early detection accuracy.
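As a rough illustration of the detection setup described in the abstract (a minimal sketch, not the authors' released code), the two architectures can be instantiated in PyTorch/torchvision and re-headed for the 174 species/growth-stage classes; the backbone weights, image size, and torch.hub entry point used below are assumptions.

```python
# Minimal sketch (assumed configuration, not the paper's code):
# RetinaNet with a ResNeXt101-FPN backbone and DETR-ResNet50,
# each re-headed for the 174 species/growth-stage classes.
import torch
from torchvision.models.detection import RetinaNet
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

NUM_CLASSES = 174  # 16 weed species across growth-stage classes, per the abstract

# RetinaNet: ResNeXt101 feature-pyramid backbone (weights/config assumed)
backbone = resnet_fpn_backbone(backbone_name="resnext101_32x8d", weights=None)
retinanet = RetinaNet(backbone, num_classes=NUM_CLASSES).eval()

# DETR: official torch.hub entry point, classification head replaced
# (DETR reserves one extra logit for the "no object" class).
detr = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)
detr.class_embed = torch.nn.Linear(detr.class_embed.in_features, NUM_CLASSES + 1)
detr.eval()

with torch.no_grad():
    img = torch.rand(3, 800, 800)       # placeholder image tensor
    retina_out = retinanet([img])       # list of dicts: boxes, scores, labels
    detr_out = detr([img])              # dict: pred_logits, pred_boxes
print(retina_out[0]["boxes"].shape, detr_out["pred_logits"].shape)
```

Fine-tuning either model on the labeled dataset and comparing mAP, recall, and FPS (as reported in the abstract) would follow from this starting point.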
Related papers
- Multispectral Remote Sensing for Weed Detection in West Australian Agricultural Lands [3.6284577335311563]
The Kondinin region in Western Australia faces significant agricultural challenges due to pervasive weed infestations, causing economic losses and ecological impacts. This study constructs a tailored multispectral remote sensing framework for weed detection to advance precision agriculture practices. Unmanned aerial vehicles were used to collect raw multispectral data from two experimental areas over four years, covering 0.6046 km², and ground-truth annotations were created with GPS-enabled vehicles to manually label weeds and crops.
arXiv Detail & Related papers (2025-02-12T07:01:42Z) - CMAViT: Integrating Climate, Management, and Remote Sensing Data for Crop Yield Estimation with Multimodal Vision Transformers [0.0]
We introduce a deep learning-based multimodal model called the Climate-Management Aware Vision Transformer (CMAViT).
CMAViT integrates both spatial and temporal data by leveraging remote sensing imagery and short-term meteorological data.
It outperforms traditional models like UNet-ConvLSTM, excelling in spatial variability capture and yield prediction.
arXiv Detail & Related papers (2024-11-25T23:34:53Z) - Cannabis Seed Variant Detection using Faster R-CNN [0.0]
This paper presents a study on cannabis seed variant detection by employing a state-of-the-art object detection model Faster R-CNN.
We implement the model on a locally sourced cannabis seed dataset in Thailand, comprising 17 distinct classes.
We evaluate six Faster R-CNN models by comparing performance on various metrics and achieving a mAP score of 94.08% and an F1 score of 95.66%.
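As a generic illustration of that technique (not that paper's implementation; weights and class handling are assumed), torchvision's Faster R-CNN can be re-headed for the 17 seed classes:

```python
# Illustrative sketch: adapt torchvision's Faster R-CNN to 17 seed classes
# plus background. Weights and configuration are assumptions, not the paper's.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 17 + 1  # 17 cannabis seed variants + background

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
model.eval()  # ready for fine-tuning or inference on a seed dataset
```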
arXiv Detail & Related papers (2024-03-15T22:49:47Z) - HarvestNet: A Dataset for Detecting Smallholder Farming Activity Using
Harvest Piles and Remote Sensing [50.4506590177605]
HarvestNet is a dataset for mapping the presence of farms in the Ethiopian regions of Tigray and Amhara during 2020-2023.
We introduce a new approach based on the detection of harvest piles characteristic of many smallholder systems.
We conclude that remote sensing of harvest piles can contribute to more timely and accurate cropland assessments in food insecure regions.
arXiv Detail & Related papers (2023-08-23T11:03:28Z) - CWD30: A Comprehensive and Holistic Dataset for Crop Weed Recognition in
Precision Agriculture [1.64709990449384]
We present the CWD30 dataset, a large-scale, diverse, holistic, and hierarchical dataset tailored for crop-weed recognition tasks in precision agriculture.
CWD30 comprises over 219,770 high-resolution images of 20 weed species and 10 crop species, encompassing various growth stages, multiple viewing angles, and environmental conditions.
The dataset's hierarchical taxonomy enables fine-grained classification and facilitates the development of more accurate, robust, and generalizable deep learning models.
arXiv Detail & Related papers (2023-05-17T09:39:01Z) - Agave crop segmentation and maturity classification with deep learning
data-centric strategies using very high-resolution satellite imagery [101.18253437732933]
We present Agave tequilana Weber azul crop segmentation and maturity classification using very high-resolution satellite imagery.
We solve real-world deep learning problems in the very specific context of agave crop segmentation.
With the resulting accurate models, agave production forecasting can be made available for large regions.
arXiv Detail & Related papers (2023-03-21T03:15:29Z) - End-to-end deep learning for directly estimating grape yield from
ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
arXiv Detail & Related papers (2022-08-04T01:34:46Z) - Performance Evaluation of Deep Transfer Learning on Multiclass
Identification of Common Weed Species in Cotton Production Systems [3.427330019009861]
This paper presents a first comprehensive evaluation of deep transfer learning (DTL) for identifying weeds specific to cotton production systems in the southern United States.
A new dataset for weed identification was created, consisting of 5187 color images of 15 weed classes collected under natural lighting conditions and at varied weed growth stages.
DTL achieved high classification accuracy, with F1 scores exceeding 95%, while requiring reasonably short training times (less than 2.5 hours) across models.
arXiv Detail & Related papers (2021-10-11T01:51:48Z) - Potato Crop Stress Identification in Aerial Images using Deep
Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z) - A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows
from UAV Imagery [56.10033255997329]
We propose a novel deep learning method based on a Convolutional Neural Network (CNN).
It simultaneously detects and geolocates plantation rows while counting their plants, accommodating highly dense plantation configurations.
The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops.
arXiv Detail & Related papers (2020-12-31T18:51:17Z) - One-Shot Learning with Triplet Loss for Vegetation Classification Tasks [45.82374977939355]
The triplet loss function is one of the options that can significantly improve the accuracy of one-shot learning tasks.
Since 2015, many projects have used Siamese networks and this kind of loss for face recognition and object classification.
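As a generic illustration of the triplet-loss approach mentioned in that entry (a minimal sketch; the toy embedding network and margin are assumptions, not taken from the paper):

```python
# Minimal sketch of training an embedding with triplet loss for one-shot
# vegetation classification; the tiny CNN and margin value are assumptions.
import torch
import torch.nn as nn

embed = nn.Sequential(                      # toy embedding network
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 64),
)
criterion = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(embed.parameters(), lr=1e-3)

# one training step on a random (anchor, positive, negative) batch
anchor, positive, negative = (torch.rand(8, 3, 64, 64) for _ in range(3))
loss = criterion(embed(anchor), embed(positive), embed(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```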
arXiv Detail & Related papers (2020-12-14T10:44:22Z) - Learning from Data to Optimize Control in Precision Farming [77.34726150561087]
This special issue presents the latest developments in statistical inference, machine learning, and optimal control for precision farming.
Satellite positioning and navigation, followed by the Internet of Things, generate vast amounts of information that can be used to optimize farming processes in real time.
arXiv Detail & Related papers (2020-07-07T12:44:17Z)