A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows
from UAV Imagery
- URL: http://arxiv.org/abs/2012.15827v3
- Date: Sun, 14 Feb 2021 18:02:01 GMT
- Title: A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows
from UAV Imagery
- Authors: Lucas Prado Osco, Mauro dos Santos de Arruda, Diogo Nunes Gonçalves,
Alexandre Dias, Juliana Batistoti, Mauricio de Souza, Felipe David Georges Gomes,
Ana Paula Marques Ramos, Lúcio André de Castro Jorge, Veraldo Liesenberg,
Jonathan Li, Lingfei Ma, José Marcato Junior, Wesley Nunes Gonçalves
- Abstract summary: We propose a novel deep learning method based on a Convolutional Neural Network (CNN).
It simultaneously detects and geolocates plantation-rows while counting their plants, considering highly-dense plantation configurations.
The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops.
- Score: 56.10033255997329
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we propose a novel deep learning method based on a
Convolutional Neural Network (CNN) that simultaneously detects and geolocates
plantation-rows while counting their plants, considering highly-dense plantation
configurations. The experimental setup was evaluated in a cornfield with
different growth stages and in a Citrus orchard. Both datasets characterize
different plant density scenarios, locations, types of crops, sensors, and
dates. A two-branch architecture was implemented in our CNN method, where the
information obtained within the plantation-row branch is passed to the plant
detection branch and then fed back (retro-fed) to the row branch; both
predictions are then refined by a Multi-Stage Refinement method. In the corn
plantation datasets (with both
growth phases, young and mature), our approach returned a mean absolute error
(MAE) of 6.224 plants per image patch, a mean relative error (MRE) of 0.1038,
precision and recall values of 0.856 and 0.905, respectively, and an F-measure
equal to 0.876. These results were superior to those of other deep
networks (HRNet, Faster R-CNN, and RetinaNet) evaluated with the same task and
dataset. For the plantation-row detection, our approach returned precision,
recall, and F-measure scores of 0.913, 0.941, and 0.925, respectively. To test
the robustness of our model with a different type of agriculture, we performed
the same task in the citrus orchard dataset. It returned an MAE equal to 1.409
citrus-trees per patch, MRE of 0.0615, precision of 0.922, recall of 0.911, and
F-measure of 0.965. For citrus plantation-row detection, our approach resulted
in precision, recall, and F-measure scores equal to 0.965, 0.970, and 0.964,
respectively. The proposed method achieved state-of-the-art performance for
counting and geolocating plants and plant-rows in UAV images from different
types of crops.
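To make the two-branch design described in the abstract concrete, the sketch below shows one way such a network could be wired in PyTorch: a shared backbone feeds a plant-detection branch and a plantation-row branch, row features are injected into the plant branch, the updated plant features are fed back to the row branch, and both confidence maps are refined over several stages. This is an illustration only, not the authors' implementation; the name TwoBranchPlantRowNet, the channel widths, and the number of refinement stages are assumptions.

```python
# Minimal sketch (assumptions, not the authors' code) of a two-branch CNN with
# cross-branch feedback and staged refinement of plant and row confidence maps.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TwoBranchPlantRowNet(nn.Module):  # hypothetical name
    def __init__(self, feat_ch=64, stages=3):
        super().__init__()
        self.backbone = nn.Sequential(conv_block(3, feat_ch), conv_block(feat_ch, feat_ch))
        # each branch consumes its own features concatenated with the other branch's
        self.plant_branch = conv_block(2 * feat_ch, feat_ch)
        self.row_branch = conv_block(2 * feat_ch, feat_ch)
        self.plant_heads = nn.ModuleList([nn.Conv2d(feat_ch, 1, 1) for _ in range(stages)])
        self.row_heads = nn.ModuleList([nn.Conv2d(feat_ch, 1, 1) for _ in range(stages)])
        self.stages = stages

    def forward(self, x):
        feats = self.backbone(x)
        plant_feats, row_feats = feats, feats
        plant_maps, row_maps = [], []
        for s in range(self.stages):
            # row information is passed into the plant-detection branch ...
            plant_feats = self.plant_branch(torch.cat([plant_feats, row_feats], dim=1))
            # ... and the updated plant features are fed back to the row branch
            row_feats = self.row_branch(torch.cat([row_feats, plant_feats], dim=1))
            plant_maps.append(torch.sigmoid(self.plant_heads[s](plant_feats)))
            row_maps.append(torch.sigmoid(self.row_heads[s](row_feats)))
        # the last elements hold the most refined plant and row confidence maps
        return plant_maps, row_maps

net = TwoBranchPlantRowNet()
patch = torch.randn(1, 3, 256, 256)              # one RGB image patch
plant_maps, row_maps = net(patch)
print(plant_maps[-1].shape, row_maps[-1].shape)  # torch.Size([1, 1, 256, 256]) each
```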
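The reported figures are standard counting and detection metrics. The short snippet below shows how MAE, MRE, precision, recall, and F-measure would typically be computed from per-patch counts and matched detections; it assumes the usual definitions, since the abstract does not state the exact matching criterion, and the detection counts in the example are hypothetical.

```python
# Assumed standard definitions of the metrics quoted above: MAE and MRE over
# per-patch counts; precision, recall, and F-measure over matched (TP),
# spurious (FP), and missed (FN) detections.
def mae(pred_counts, true_counts):
    """Mean absolute error in plants per image patch."""
    return sum(abs(p - t) for p, t in zip(pred_counts, true_counts)) / len(true_counts)

def mre(pred_counts, true_counts):
    """Mean relative error: absolute count error normalised by the true count."""
    pairs = [(p, t) for p, t in zip(pred_counts, true_counts) if t > 0]
    return sum(abs(p - t) / t for p, t in pairs) / len(pairs)

def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F-measure from detection matches."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Hypothetical counts chosen so that precision ≈ 0.856 and recall ≈ 0.905; their
# harmonic mean (≈ 0.88) is close to the corn F-measure of 0.876 reported above
# (small gaps can arise from per-patch averaging).
print(precision_recall_f1(tp=905, fp=152, fn=95))
```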
Related papers
- Local Manifold Learning for No-Reference Image Quality Assessment [68.9577503732292]
We propose an innovative framework that integrates local manifold learning with contrastive learning for No-Reference Image Quality Assessment (NR-IQA).
Our approach demonstrates a better performance compared to state-of-the-art methods in 7 standard datasets.
arXiv Detail & Related papers (2024-06-27T15:14:23Z)
- Boosting Crop Classification by Hierarchically Fusing Satellite, Rotational, and Contextual Data [0.0]
We propose a novel approach to fuse multimodal information into a model for improved accuracy and robustness across multiple years and countries.
To evaluate our approach, we release a new annotated dataset of 7.4 million agricultural parcels in France and Netherlands.
arXiv Detail & Related papers (2023-05-19T21:42:53Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
The study investigates chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs had comparable CXR classification AUCs compared with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- Transferring learned patterns from ground-based field imagery to predict UAV-based imagery for crop and weed semantic segmentation in precision crop farming [3.95486899327898]
We have developed a deep convolutional network that can predict weed segmentation on both ground-based field images and aerial images from UAVs.
The network learning process is visualized by feature maps at shallow and deep layers.
The study shows that the developed deep convolutional neural network could be used to classify weeds from both field and aerial images.
arXiv Detail & Related papers (2022-10-20T19:25:06Z)
- Comparing Machine Learning Techniques for Alfalfa Biomass Yield Prediction [0.8808021343665321]
The alfalfa crop is globally important as livestock feed, so highly efficient planting and harvesting could benefit many industries.
Recent work using machine learning to predict yields for alfalfa and other crops has shown promise.
Previous efforts used remote sensing, weather, planting, and soil data to train machine learning models for yield prediction.
arXiv Detail & Related papers (2022-10-20T13:00:33Z)
- End-to-end deep learning for directly estimating grape yield from ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
arXiv Detail & Related papers (2022-08-04T01:34:46Z)
- Automatic Plant Cover Estimation with Convolutional Neural Networks [8.361945776819528]
We investigate approaches using convolutional neural networks (CNNs) to automatically extract the relevant data from images.
We find that we outperform our previous approach at higher image resolutions using a custom CNN with a mean absolute error of 5.16%.
In addition to these investigations, we also conduct an error analysis based on the temporal aspect of the plant cover images.
arXiv Detail & Related papers (2021-06-21T14:52:01Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- A Deep Learning Approach Based on Graphs to Detect Plantation Lines [16.76043873454695]
We propose a deep learning approach based on graphs to detect plantation lines in UAV-based RGB imagery.
The proposed method was compared against state-of-the-art deep learning methods.
It achieved superior performance with a significant margin, returning precision, recall, and F1-score of 98.7%, 91.9%, and 95.1%, respectively.
arXiv Detail & Related papers (2021-02-05T14:56:42Z)
- Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-05-18T21:57:47Z)
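As a rough illustration of the Siamese deep metric mentioned in the last entry, the sketch below pairs a small CNN embedder with a contrastive margin loss, so that distance in the embedding space acts as the learned metric. It is a toy under stated assumptions, not the cited paper's architecture or training setup: LeafEmbedder, the margin, and all layer sizes are invented here.

```python
# Minimal sketch (assumptions, not the cited paper's code) of a Siamese CNN
# trained with a contrastive margin loss over two leaf views.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeafEmbedder(nn.Module):  # hypothetical module name
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        # L2-normalised embedding so pairwise distances are comparable
        return F.normalize(self.fc(self.features(x).flatten(1)), dim=1)

def contrastive_loss(z1, z2, same_species, margin=1.0):
    """Pull same-species pairs together, push different-species pairs apart."""
    d = F.pairwise_distance(z1, z2)
    return (same_species * d.pow(2) +
            (1 - same_species) * F.relu(margin - d).pow(2)).mean()

embedder = LeafEmbedder()
view_a = torch.randn(4, 3, 128, 128)             # e.g. whole-leaf view
view_b = torch.randn(4, 3, 128, 128)             # e.g. second view of the pair
labels = torch.tensor([1., 0., 1., 0.])          # 1 = same species pair
loss = contrastive_loss(embedder(view_a), embedder(view_b), labels)
loss.backward()
```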