Detection of Degraded Acacia tree species using deep neural networks on UAV drone imagery
- URL: http://arxiv.org/abs/2204.07096v1
- Date: Thu, 14 Apr 2022 16:37:26 GMT
- Title: Detection of Degraded Acacia tree species using deep neural networks on UAV drone imagery
- Authors: Anne Achieng Osio, Hoàng-Ân Lê, Samson Ayugi, Fred Onyango, Peter Odwe, Sébastien Lefèvre
- Abstract summary: Unmanned Aerial Vehicles (UAVs) with embedded RGB cameras were used to capture fallen Acacia xanthophloea trees.
Deep neural networks were used for fallen tree detection.
Retina-Net model achieved 38.9% precision and 57.9% recall.
- Score: 2.3837581572935505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep-learning-based image classification and object detection have been
applied successfully to tree monitoring. However, studies of tree crowns and
fallen trees, especially in flood-inundated areas, remain largely unexplored.
Detection of degraded tree trunks in natural environments such as water,
mudflats, and natural vegetated areas is challenging due to the mixed-colour
image backgrounds. In this paper, Unmanned Aerial Vehicles (UAVs), or drones,
with embedded RGB cameras were used to capture fallen Acacia xanthophloea
trees from six designated plots around Lake Nakuru, Kenya. Motivated by the
need to detect fallen trees around the lake, two well-established deep neural
networks, i.e. Faster Region-based Convolutional Neural Network (Faster R-CNN)
and Retina-Net, were used for fallen tree detection. A total of 7,590
annotations of three classes on 256 x 256 image patches were used for this
study. Experimental results show the relevance of deep learning in this
context, with the Retina-Net model achieving 38.9% precision and 57.9% recall.
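The setup described in the abstract (two off-the-shelf detectors applied to 256 x 256 RGB patches with three annotation classes) maps naturally onto torchvision's detection API. The sketch below is illustrative only, assuming torchvision >= 0.13; it is not the authors' released code, training procedure, or hyper-parameters.

```python
# Hypothetical sketch of fallen-tree detection on 256x256 UAV patches using
# torchvision's stock Faster R-CNN and Retina-Net implementations.
import torch
import torchvision

NUM_CLASSES = 3 + 1  # three annotation classes + background (torchvision convention)

# Architectures only; weights and hyper-parameters here are not the authors'.
retinanet = torchvision.models.detection.retinanet_resnet50_fpn(
    weights=None, num_classes=NUM_CLASSES)
faster_rcnn = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=NUM_CLASSES)

def detect(model, patch, score_threshold=0.5):
    """Run one 256x256 RGB patch through a detector and keep confident boxes."""
    model.eval()
    with torch.no_grad():
        output = model([patch])[0]  # dict with 'boxes', 'labels', 'scores'
    keep = output["scores"] >= score_threshold
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]

# Example call on a dummy patch (values in [0, 1], shape C x H x W).
boxes, labels, scores = detect(retinanet, torch.rand(3, 256, 256))
```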
Related papers
- PlantCamo: Plant Camouflage Detection [60.685139083469956]
This paper introduces a new challenging problem of Plant Camouflage Detection (PCD)
To address this problem, we introduce the PlantCamo dataset, which comprises 1,250 images with camouflaged plants.
We conduct a large-scale benchmark study using 20+ cutting-edge COD models on the proposed dataset.
Our PCNet surpasses the performance of existing models thanks to its multi-scale global feature enhancement and refinement.
arXiv Detail & Related papers (2024-10-23T06:51:59Z)
- Comparative Analysis of Novel View Synthesis and Photogrammetry for 3D Forest Stand Reconstruction and extraction of individual tree parameters [2.153174198957389]
Photogrammetry is commonly used for reconstructing forest scenes but faces challenges like low efficiency and poor quality.
NeRF, while better for canopy regions, may produce errors in ground areas with limited views.
3DGS method generates sparser point clouds, particularly in trunk areas, affecting diameter at breast height (DBH) accuracy.
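The DBH figure mentioned above is commonly derived from a reconstructed point cloud by slicing the trunk at breast height (1.3 m) and fitting a circle to the slice; the minimal, generic sketch below (not the paper's pipeline) shows why sparse trunk points make that estimate unstable.

```python
# Illustrative DBH estimate via a least-squares (Kasa) circle fit on a thin
# horizontal slice of trunk points. Units follow the input coordinates.
import numpy as np

def estimate_dbh(points, ground_z=0.0, breast_height=1.3, slice_half_width=0.05):
    """points: (N, 3) array of x, y, z coordinates for a single tree."""
    z = points[:, 2] - ground_z
    mask = np.abs(z - breast_height) <= slice_half_width
    xy = points[mask, :2]
    if len(xy) < 3:
        raise ValueError("too few trunk points in the breast-height slice")
    x, y = xy[:, 0], xy[:, 1]
    # Solve 2*cx*x + 2*cy*y + c = x^2 + y^2 for circle centre (cx, cy).
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return 2.0 * radius
```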
arXiv Detail & Related papers (2024-10-08T07:53:21Z)
- Exploring Geometry of Blind Spots in Vision Models [56.47644447201878]
We study the phenomenon of under-sensitivity in vision models such as CNNs and Transformers.
We propose a Level Set Traversal algorithm that iteratively explores regions of high confidence with respect to the input space.
We estimate the extent of these connected higher-dimensional regions over which the model maintains a high degree of confidence.
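A heavily simplified sketch of the level-set idea described above (not the authors' exact algorithm): step from a source image toward a target image while removing the component of each step that would change the source-class score, so the traversal stays inside a high-confidence region.

```python
# Toy level-set walk; model is assumed to map a batch of images to class logits.
import torch

def level_set_traversal(model, x_src, x_tgt, src_class, steps=100, eta=0.02):
    x = x_src.detach().clone()
    for _ in range(steps):
        x.requires_grad_(True)
        score = model(x.unsqueeze(0))[0, src_class]      # source-class logit
        grad, = torch.autograd.grad(score, x)
        x = x.detach()
        g = grad / (grad.norm() + 1e-12)
        direction = x_tgt - x
        # keep only the step component orthogonal to the score gradient
        step = direction - (direction * g).sum() * g
        x = (x + eta * step).clamp(0.0, 1.0)
    return x  # ideally still classified as src_class with high confidence
```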
arXiv Detail & Related papers (2023-10-30T18:00:33Z)
- Classification of Single Tree Decay Stages from Combined Airborne LiDAR Data and CIR Imagery [1.4589991363650008]
This study, for the first time, automatically categorizes individual trees (Norway spruce) into five decay stages.
Three different machine learning methods are compared: 3D point cloud-based deep learning (KPConv), Convolutional Neural Network (CNN), and Random Forest (RF).
All models achieved promising results, reaching overall accuracy (OA) of up to 88.8%, 88.4% and 85.9% for KPConv, CNN and RF, respectively.
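For orientation, the simplest of the three baselines above (Random Forest on hand-crafted features) and the overall-accuracy metric fit in a few lines of scikit-learn; the features and data below are synthetic placeholders, not the study's predictors.

```python
# Placeholder Random Forest baseline for a five-class decay-stage problem.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))    # e.g. height percentiles, intensity, CIR band stats
y = rng.integers(0, 5, size=500)  # five decay stages

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```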
arXiv Detail & Related papers (2023-01-04T22:20:16Z)
- Neuroevolution-based Classifiers for Deforestation Detection in Tropical Forests [62.997667081978825]
Millions of hectares of tropical forests are lost every year due to deforestation or degradation.
Monitoring and deforestation detection programs are in use, in addition to public policies for the prevention and punishment of criminals.
This paper proposes the use of pattern classifiers based on neuroevolution technique (NEAT) in tropical forest deforestation detection tasks.
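To make the neuroevolution idea concrete, here is a deliberately stripped-down, fixed-topology evolutionary loop over the weights of a single neuron; NEAT itself (as used in the paper) additionally evolves network topology, and the spectral features and labels below are synthetic placeholders.

```python
# Toy neuroevolution: evolve weight vectors by selection + mutation, with
# fitness defined as classification accuracy on per-pixel features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))               # placeholder spectral features
y = (X[:, :3].sum(axis=1) > 0).astype(int)  # placeholder forest / deforested labels

def fitness(w):
    logits = np.tanh(X @ w[:-1] + w[-1])
    return ((logits > 0).astype(int) == y).mean()

pop = rng.normal(size=(50, X.shape[1] + 1))  # population of 50 weight vectors
for generation in range(30):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]  # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=40)] \
        + 0.1 * rng.normal(size=(40, pop.shape[1]))
    pop = np.vstack([parents, children])     # elitism + mutated offspring

print("best fitness:", max(fitness(w) for w in pop))
```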
arXiv Detail & Related papers (2022-08-23T16:04:12Z)
- Individual Tree Detection in Large-Scale Urban Environments using High-Resolution Multispectral Imagery [1.1661668662828382]
We introduce a novel deep learning method for detection of individual trees in urban environments.
We use a convolutional neural network to regress a confidence map indicating the locations of individual trees.
Our method provides complete spatial coverage by detecting trees in both public and private spaces.
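The confidence-map formulation above implies a simple decoding step: find local maxima above a threshold and report them as tree locations. A generic version of that post-processing (not necessarily the authors' exact decoder) is sketched below.

```python
# Peak picking on a regressed confidence map.
import numpy as np
from scipy.ndimage import maximum_filter

def peaks_from_confidence(conf, threshold=0.5, window=15):
    """Return (row, col) coordinates of local maxima in a 2D confidence map."""
    local_max = maximum_filter(conf, size=window) == conf
    return np.argwhere(local_max & (conf > threshold))

# conf = model(image)  # H x W map in [0, 1] from the regression CNN
conf = np.random.default_rng(0).random((256, 256))
print(peaks_from_confidence(conf, threshold=0.95))
```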
arXiv Detail & Related papers (2022-08-22T21:26:57Z)
- Development of Automatic Tree Counting Software from UAV Based Aerial Images With Machine Learning [0.0]
This study aims to automatically count trees in designated areas on the Siirt University campus from high-resolution images obtained by UAV.
Images obtained at a height of 30 meters with 20% overlap were stitched offline at the ground station using Adobe Photoshop's photo-merge tool.
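The study counts trees with a learned model on the stitched orthomosaic; purely for illustration, a naive non-learned baseline would threshold greenness and count connected blobs, which quickly breaks down when canopies touch. The index and thresholds below are rough assumptions, not the study's method.

```python
# Crude blob-counting baseline on an RGB orthomosaic patch.
import numpy as np
from scipy.ndimage import label

def count_tree_blobs(rgb, green_margin=20, min_pixels=50):
    """rgb: H x W x 3 uint8 array. Returns a rough count of green blobs."""
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    vegetation = (g > r + green_margin) & (g > b + green_margin)  # "greener than red/blue"
    labels, n = label(vegetation)
    sizes = np.bincount(labels.ravel())[1:]  # ignore background label 0
    return int((sizes >= min_pixels).sum())
```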
arXiv Detail & Related papers (2022-01-07T22:32:08Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
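The Dice coefficient quoted above is a standard overlap score between a predicted mask and the ground-truth mask; for binary masks it reduces to a few lines.

```python
# Dice = 2|P ∩ T| / (|P| + |T|) for boolean masks of equal shape.
import numpy as np

def dice(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```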
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Coconut trees detection and segmentation in aerial imagery using mask region-based convolution neural network [3.8902657229395907]
A deep learning approach is presented for the detection and segmentation of coconut trees in aerial imagery provided by the World Bank in collaboration with OpenMap and WeRobotics.
An overall 91% mean average precision for coconut trees detection was achieved.
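The entry describes a Mask R-CNN pipeline; a hedged inference sketch with torchvision's implementation (not the authors' trained weights or data) is shown below, assuming a single coconut-tree class plus background and torchvision >= 0.13.

```python
# Instance segmentation of trees in an aerial tile with torchvision's Mask R-CNN.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

image = torch.rand(3, 512, 512)      # placeholder aerial tile with values in [0, 1]
with torch.no_grad():
    pred = model([image])[0]         # 'boxes', 'labels', 'scores', 'masks'
keep = pred["scores"] >= 0.5
tree_masks = pred["masks"][keep] > 0.5  # one binary mask per detected tree
print("coconut trees detected:", int(keep.sum()))
```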
arXiv Detail & Related papers (2021-05-10T13:42:19Z)
- Probing Predictions on OOD Images via Nearest Categories [97.055916832257]
We study out-of-distribution (OOD) prediction behavior of neural networks when they classify images from unseen classes or corrupted images.
We introduce a new measure, nearest category generalization (NCG), where we compute the fraction of OOD inputs that are classified with the same label as their nearest neighbor in the training set.
We find that robust networks have consistently higher NCG accuracy than natural training, even when the OOD data is much farther away than the robustness radius.
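The NCG definition above translates directly into code: predict labels for the OOD inputs, find each input's nearest training example, and report the agreement rate. The distance metric used here (Euclidean on flattened inputs) is an assumption for the sketch.

```python
# Nearest category generalization (NCG) as defined above, by brute force.
import numpy as np

def nearest_category_generalization(model_predict, X_train, y_train, X_ood):
    """model_predict: callable mapping an array of inputs to predicted labels."""
    preds = model_predict(X_ood)
    flat_train = X_train.reshape(len(X_train), -1)
    flat_ood = X_ood.reshape(len(X_ood), -1)
    # index of the nearest training point for every OOD input (fine at sketch scale)
    d2 = ((flat_ood[:, None, :] - flat_train[None, :, :]) ** 2).sum(-1)
    nearest = y_train[np.argmin(d2, axis=1)]
    return (preds == nearest).mean()
```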
arXiv Detail & Related papers (2020-11-17T07:42:27Z)
- Learning CNN filters from user-drawn image markers for coconut-tree image classification [78.42152902652215]
We present a method that needs a minimal set of user-selected images to train the CNN's feature extractor.
The method learns the filters of each convolutional layer from user-drawn markers in image regions that discriminate classes.
It does not rely on optimization based on backpropagation, and we demonstrate its advantages on the binary classification of coconut-tree aerial images.
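In the spirit of the marker-based idea above (and not the authors' exact pipeline), first-layer kernels can be obtained without backpropagation by clustering small patches centred on the user-drawn markers and using the normalized cluster centres as filters; the patch size and filter count below are illustrative.

```python
# Derive convolution kernels from user-marked pixels by patch clustering.
import numpy as np
from sklearn.cluster import KMeans

def filters_from_markers(image, marker_coords, patch_size=3, n_filters=8):
    """image: H x W x C float array; marker_coords: list of (row, col) pixels."""
    half = patch_size // 2
    patches = []
    for r, c in marker_coords:  # assumes enough markers for n_filters clusters
        patch = image[r - half:r + half + 1, c - half:c + half + 1, :]
        if patch.shape[:2] == (patch_size, patch_size):
            patches.append(patch.reshape(-1))
    centers = KMeans(n_clusters=n_filters, n_init=10).fit(np.array(patches)).cluster_centers_
    kernels = centers.reshape(n_filters, patch_size, patch_size, -1)
    # unit-norm kernels so each filter responds to patch appearance, not magnitude
    norms = np.linalg.norm(kernels.reshape(n_filters, -1), axis=1)
    return kernels / (norms[:, None, None, None] + 1e-12)
```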
arXiv Detail & Related papers (2020-08-08T15:50:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.