Potato Crop Stress Identification in Aerial Images using Deep
Learning-based Object Detection
- URL: http://arxiv.org/abs/2106.07770v2
- Date: Wed, 16 Jun 2021 15:49:52 GMT
- Title: Potato Crop Stress Identification in Aerial Images using Deep
Learning-based Object Detection
- Authors: Sujata Butte, Aleksandar Vakanski, Kasia Duellman, Haotian Wang, Amin
Mirkouei
- Abstract summary: The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
- Score: 60.83360138070649
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research on the application of remote sensing and deep learning-based
analysis in precision agriculture demonstrated a potential for improved crop
management and reduced environmental impacts of agricultural production.
Despite the promising results, the practical relevance of these technologies
for actual field deployment requires novel algorithms that are customized for
analysis of agricultural images and robust to implementation on natural field
imagery. The paper presents an approach for analyzing aerial images of a potato
crop using deep neural networks. The main objective is to demonstrate automated
spatial recognition of a healthy versus stressed crop at a plant level.
Specifically, we examine premature plant senescence resulting in drought stress
on Russet Burbank potato plants. The proposed deep learning model, named
Retina-UNet-Ag, is a variant of Retina-UNet (Jaeger et al., 2018) and includes
connections from low-level semantic dense representation maps to the feature
pyramid network. The paper also introduces a dataset of field images acquired
with a Parrot Sequoia camera carried by a Solo unmanned aerial vehicle.
Experimental validation demonstrated the ability to distinguish healthy and
stressed plants in field images, achieving an average Dice coefficient of
0.74. A comparison to related state-of-the-art deep learning models for object
detection revealed that the presented approach is effective for the task at
hand. The presented method supports the assessment and recognition of potato
crop stress (early plant senescence resulting from drought stress in this
case) in aerial field images collected under natural conditions.
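For reference, the Dice coefficient used to report the 0.74 result measures overlap between a predicted and a ground-truth binary mask. The following is a minimal NumPy sketch of the metric, not the authors' implementation; the function and array names are illustrative:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |A intersect B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (perfect overlap). `eps` guards against division by zero when
    both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Example: two 4x4 masks whose foreground regions (3 pixels each)
# overlap in 2 pixels, giving Dice = 2*2 / (3+3) ~= 0.667.
pred = np.zeros((4, 4), dtype=bool)
pred[0, 0:3] = True
target = np.zeros((4, 4), dtype=bool)
target[0, 1:4] = True
print(round(dice_coefficient(pred, target), 3))
```

An average of per-plant Dice scores over the test images would yield the kind of summary figure reported in the abstract.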
Related papers
- Explainable Light-Weight Deep Learning Pipeline for Improved Drought Stress Identification [0.0]
Early identification of drought stress in crops is vital for implementing effective mitigation measures and reducing yield loss.
Our work proposes a novel deep learning framework for classifying drought stress in potato crops captured by UAVs in natural settings.
A key innovation of our work involves the integration of Gradient-Class Activation Mapping (Grad-CAM), an explainability technique.
arXiv Detail & Related papers (2024-04-15T18:26:03Z) - BonnBeetClouds3D: A Dataset Towards Point Cloud-based Organ-level
Phenotyping of Sugar Beet Plants under Field Conditions [30.27773980916216]
Agricultural production faces severe challenges in the coming decades, induced by climate change and the need for sustainability.
Advancements in field management through non-chemical weeding by robots in combination with monitoring of crops by autonomous unmanned aerial vehicles (UAVs) are helpful to address these challenges.
The analysis of plant traits, called phenotyping, is an essential activity in plant breeding; however, it involves a great amount of manual labor.
arXiv Detail & Related papers (2023-12-22T14:06:44Z) - Perceptual Artifacts Localization for Image Synthesis Tasks [59.638307505334076]
We introduce a novel dataset comprising 10,168 generated images, each annotated with per-pixel perceptual artifact labels.
A segmentation model, trained on our proposed dataset, effectively localizes artifacts across a range of tasks.
We propose an innovative zoom-in inpainting pipeline that seamlessly rectifies perceptual artifacts in the generated images.
arXiv Detail & Related papers (2023-10-09T10:22:08Z) - Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use Deep Learning methods to semantically segment grapevine leaf images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z) - End-to-end deep learning for directly estimating grape yield from
ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
arXiv Detail & Related papers (2022-08-04T01:34:46Z) - Estimación de áreas de cultivo mediante Deep Learning y
programación convencional (Crop Area Estimation using Deep Learning and Conventional Programming) [0.0]
We have considered as a case study one of the most recognized companies in the planting and harvesting of sugar cane in Ecuador.
The strategy combines a Generative Adversarial Neural Network (GAN) that is trained on a dataset of aerial photographs of sugar cane plots to distinguish populated or unpopulated crop areas.
The experiments performed demonstrate a significant improvement in the quality of the aerial photographs.
arXiv Detail & Related papers (2022-07-25T16:22:55Z) - Estimating Crop Primary Productivity with Sentinel-2 and Landsat 8 using
Machine Learning Methods Trained with Radiative Transfer Simulations [58.17039841385472]
We take advantage of all parallel developments in mechanistic modeling and satellite data availability for advanced monitoring of crop productivity.
Our model successfully estimates gross primary productivity across a variety of C3 crop types and environmental conditions even though it does not use any local information from the corresponding sites.
This highlights its potential to map crop productivity from new satellite sensors at a global scale with the help of current Earth observation cloud computing platforms.
arXiv Detail & Related papers (2020-12-07T16:23:13Z) - UAV and Machine Learning Based Refinement of a Satellite-Driven
Vegetation Index for Precision Agriculture [0.8399688944263843]
This paper presents a novel satellite imagery refinement framework based on a deep learning technique.
It exploits information properly derived from high resolution images acquired by unmanned aerial vehicle (UAV) airborne multispectral sensors.
A vineyard in Serralunga d'Alba (Northern Italy) was chosen as a case study for validation purposes.
arXiv Detail & Related papers (2020-04-29T18:34:48Z) - Deep Transfer Learning For Plant Center Localization [19.322420819302263]
This paper investigates methods that estimate plant locations for a field-based crop using RGB aerial images captured by Unmanned Aerial Vehicles (UAVs).
Deep learning approaches provide promising capability for locating plants observed in RGB images, but they require large quantities of labeled data (ground truth) for training.
We propose a method for estimating plant centers by transferring an existing model to a new scenario using limited ground truth data.
arXiv Detail & Related papers (2020-04-29T06:29:49Z) - Agriculture-Vision: A Large Aerial Image Database for Agricultural
Pattern Analysis [110.30849704592592]
We present Agriculture-Vision: a large-scale aerial farmland image dataset for semantic segmentation of agricultural patterns.
Each image consists of RGB and Near-infrared (NIR) channels with resolution as high as 10 cm per pixel.
We annotate nine types of field anomaly patterns that are most important to farmers.
arXiv Detail & Related papers (2020-01-05T20:19:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.