Agricultural Plant Cataloging and Establishment of a Data Framework from
UAV-based Crop Images by Computer Vision
- URL: http://arxiv.org/abs/2201.02885v2
- Date: Tue, 11 Jan 2022 11:49:09 GMT
- Title: Agricultural Plant Cataloging and Establishment of a Data Framework from
UAV-based Crop Images by Computer Vision
- Authors: Maurice Günder, Facundo R. Ispizua Yamati, Jana Kierdorf, Ribana
Roscher, Anne-Katrin Mahlein, Christian Bauckhage
- Abstract summary: We present a hands-on workflow for the automated temporal and spatial identification and individualization of crop images from UAVs.
The presented approach significantly improves the analysis and interpretation of UAV data in agriculture.
- Score: 4.0382342610484425
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: UAV-based image retrieval in modern agriculture enables gathering large
amounts of spatially referenced crop image data. In large-scale experiments,
however, UAV images contain a multitude of crops in a complex canopy
architecture. Especially for the observation of temporal effects, this greatly
complicates recognizing individual plants across several images and extracting
the relevant information. In this work, we present a hands-on workflow, based
on comprehensible computer vision methods, for the automated temporal and
spatial identification and individualization of crop images from UAVs,
abbreviated as "cataloging". We evaluate the workflow on two real-world
datasets. One dataset was recorded to observe Cercospora leaf spot - a fungal
disease - in sugar beet over an entire growing cycle. The other deals with
harvest prediction of cauliflower plants. The plant catalog is used to extract
single-plant images seen over multiple time points. This yields a large-scale
spatio-temporal image dataset that can in turn be used to train further machine
learning models incorporating various data layers. The presented approach
significantly improves the analysis and interpretation of UAV data in
agriculture. Validated against reference data, our method achieves an accuracy
similar to that of more complex deep learning-based recognition techniques. Our
workflow can automate plant cataloging and training image extraction,
especially for large datasets.
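The paper's cataloging pipeline is not spelled out in the abstract, but its two core steps - segmenting individual plants in a georeferenced UAV image and matching plant positions across flight dates - can be sketched with classic computer vision tools. The following is a minimal, illustrative sketch, not the authors' implementation; the function names, the Excess Green threshold, and the matching distance are assumptions chosen for the example.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def detect_plants(rgb, exg_thresh=0.1, min_pixels=20):
    """Segment vegetation with the Excess Green index and return plant centroids.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns an (N, 2) array of (row, col) centroids, one per detected plant.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                       # Excess Green vegetation index
    mask = exg > exg_thresh
    labels, n = ndimage.label(mask)           # connected components = plant candidates
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    if not keep:
        return np.empty((0, 2))
    return np.array(ndimage.center_of_mass(mask, labels, keep))

def match_across_dates(centroids_t0, centroids_t1, max_dist=5.0):
    """Assign each plant seen at t1 to its nearest catalogued plant from t0.

    Returns a list of (index_t0, index_t1) pairs; detections farther than
    max_dist (in georeferenced units) from any t0 plant stay unmatched.
    """
    tree = cKDTree(centroids_t0)
    dists, idx = tree.query(centroids_t1)
    return [(int(i0), i1)
            for i1, (d, i0) in enumerate(zip(dists, idx)) if d <= max_dist]
```

Matching detections from consecutive flights with `match_across_dates` assigns each plant a stable catalog identity over time, so that per-plant image patches can then be cropped from every flight date.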
Related papers
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- Improving Data Efficiency for Plant Cover Prediction with Label Interpolation and Monte-Carlo Cropping [7.993547048820065]
The plant community composition is an essential indicator of environmental changes and is usually analyzed in ecological field studies.
We introduce an approach to interpolate the sparse labels in the collected vegetation plot time series onto the intermediate dense, unlabeled images.
We also introduce a new method we call Monte-Carlo Cropping to deal with high-resolution images efficiently.
arXiv Detail & Related papers (2023-07-17T15:17:39Z)
- Inside Out: Transforming Images of Lab-Grown Plants for Machine Learning Applications in Agriculture [0.0]
We employ a contrastive unpaired translation (CUT) generative adversarial network (GAN) to translate indoor plant images to appear as field images.
While we train our network to translate an image containing only a single plant, we show that our method is easily extendable to produce multiple-plant field images.
We also use our synthetic multi-plant images to train several YoloV5 nano object detection models to perform the task of plant detection.
arXiv Detail & Related papers (2022-11-05T20:51:45Z)
- Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use Deep Learning methods to semantically segment grapevine leaves images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z)
- Semantic Segmentation of Vegetation in Remote Sensing Imagery Using Deep Learning [77.34726150561087]
We propose an approach for creating a multi-modal and large-temporal dataset comprised of publicly available Remote Sensing data.
We use Convolutional Neural Networks (CNN) models that are capable of separating different classes of vegetation.
arXiv Detail & Related papers (2022-09-28T18:51:59Z)
- End-to-end deep learning for directly estimating grape yield from ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
arXiv Detail & Related papers (2022-08-04T01:34:46Z)
- High-Resolution UAV Image Generation for Sorghum Panicle Detection [23.88932181375298]
We present an approach that uses synthetic training images from generative adversarial networks (GANs) for data augmentation to enhance the performance of Sorghum panicle detection and counting.
Our method can generate synthetic high-resolution UAV RGB images with panicle labels by using image-to-image translation GANs with a limited ground truth dataset of real UAV RGB images.
arXiv Detail & Related papers (2022-05-08T20:26:56Z)
- Enlisting 3D Crop Models and GANs for More Data Efficient and Generalizable Fruit Detection [0.0]
We propose a method that translates agricultural images from a synthetic 3D crop model domain into real-world crop domains.
The method uses a semantically constrained GAN (generative adversarial network) to preserve the fruit position and geometry.
Incremental training experiments in vineyard grape detection tasks show that the images generated by our method can significantly speed up the domain adaptation process.
arXiv Detail & Related papers (2021-08-30T16:11:59Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Image Augmentation for Multitask Few-Shot Learning: Agricultural Domain Use-Case [0.0]
This paper addresses the challenge of small and imbalanced datasets, using the plant phenomics domain as an example.
We introduce an image augmentation framework that enables us to greatly enlarge the number of training samples.
We prove that our augmentation method increases model performance when only a few training samples are available.
arXiv Detail & Related papers (2021-02-24T14:08:34Z)
- Agriculture-Vision: A Large Aerial Image Database for Agricultural Pattern Analysis [110.30849704592592]
We present Agriculture-Vision: a large-scale aerial farmland image dataset for semantic segmentation of agricultural patterns.
Each image consists of RGB and Near-infrared (NIR) channels with resolution as high as 10 cm per pixel.
We annotate nine types of field anomaly patterns that are most important to farmers.
arXiv Detail & Related papers (2020-01-05T20:19:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.