Development of Automatic Tree Counting Software from UAV Based Aerial
Images With Machine Learning
- URL: http://arxiv.org/abs/2201.02698v1
- Date: Fri, 7 Jan 2022 22:32:08 GMT
- Authors: Musa Ataş, Ayhan Talay
- Abstract summary: This study aims to automatically count trees in designated areas on the Siirt University campus from high-resolution images obtained by UAV.
Images obtained at 30 meters height with 20% overlap were stitched offline at the ground station using Adobe Photoshop's photo merge tool.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unmanned aerial vehicles (UAV) are used successfully in many application
areas such as military, security, monitoring, emergency aid, tourism,
agriculture, and forestry. This study aims to automatically count trees in
designated areas on the Siirt University campus from high-resolution images
obtained by UAV. Images obtained at 30 meters height with 20% overlap were
stitched offline at the ground station using Adobe Photoshop's photo merge
tool. The resulting image was denoised and smoothed by applying the 3x3 median
and mean filter, respectively. After generating the orthophoto map of the
aerial images captured by the UAV in certain regions, the bounding boxes of
different objects on these maps were labeled in the modalities of HSV (Hue
Saturation Value), RGB (Red Green Blue), and Gray. Training, validation, and
test datasets were generated and then evaluated for tree-detection
classification accuracy using various machine learning algorithms. In the
last step, a ground truth model was established by obtaining
the actual tree numbers, and then the prediction performance was calculated by
comparing the reference ground truth data with the proposed model's
predictions. The average accuracy of 87% obtained with the MLP classifier in
the predetermined regions is considered a significant success for tree
counting.
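The denoise/smooth step described above can be sketched in a few lines. The following is a minimal NumPy illustration of a 3x3 median filter followed by a 3x3 mean filter, plus a standard luminance conversion for the Gray modality; it is a stand-in for exposition, not the authors' code (the study stitched images in Adobe Photoshop and classified with an MLP).

```python
import numpy as np

def median_mean_filter(img):
    """Apply a 3x3 median filter, then a 3x3 mean filter, as in the
    denoise/smooth step of the paper's preprocessing pipeline."""
    def neighborhoods(a):
        # Stack the 9 shifted views forming each pixel's 3x3 neighborhood,
        # using edge padding so the output keeps the input shape.
        p = np.pad(a, 1, mode="edge")
        return np.stack([p[i:i + a.shape[0], j:j + a.shape[1]]
                         for i in range(3) for j in range(3)])

    denoised = np.median(neighborhoods(img), axis=0)   # 3x3 median
    return neighborhoods(denoised).mean(axis=0)        # 3x3 mean

def rgb_to_gray(rgb):
    """ITU-R BT.601 luminance weights; one way to produce the Gray
    modality from an RGB image (H x W x 3 -> H x W)."""
    return rgb @ np.array([0.299, 0.587, 0.114])
```

For example, a single salt-noise pixel in an otherwise flat region is removed entirely by the median step, which is why it precedes the mean filter.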
Related papers
- OAM-TCD: A globally diverse dataset of high-resolution tree cover maps [8.336960607169175]
We present a novel open-access dataset for individual tree crown delineation (TCD) in high-resolution aerial imagery sourced from OpenAerialMap (OAM).
Our dataset, OAM-TCD, comprises 5072 2048x2048px images at 10 cm/px resolution with associated human-labeled instance masks for over 280k individual and 56k groups of trees.
Using our dataset, we train reference instance and semantic segmentation models that compare favorably to existing state-of-the-art models.
arXiv Detail & Related papers (2024-07-16T14:11:29Z) - A Comprehensive Review on Tree Detection Methods Using Point Cloud and
Aerial Imagery from Unmanned Aerial Vehicles [4.362788465317224]
This paper focuses on tree detection methods applied to data collected by UAVs.
For methods that use images directly, the paper groups approaches by whether or not they use Deep Learning (DL).
This review could help researchers who want to carry out tree detection in specific forests, and farmers who want to use UAVs in managing agricultural production.
arXiv Detail & Related papers (2023-09-28T12:22:39Z) - Unleash the Potential of Image Branch for Cross-modal 3D Object
Detection [67.94357336206136]
We present a new cross-modal 3D object detector, namely UPIDet, which aims to unleash the potential of the image branch from two aspects.
First, UPIDet introduces a new 2D auxiliary task called normalized local coordinate map estimation.
Second, we discover that the representational capability of the point cloud backbone can be enhanced through the gradients backpropagated from the training objectives of the image branch.
arXiv Detail & Related papers (2023-01-22T08:26:58Z) - Classification of Single Tree Decay Stages from Combined Airborne LiDAR
Data and CIR Imagery [1.4589991363650008]
This study, for the first time, automatically categorizes individual trees (Norway spruce) into five decay stages.
Three machine learning methods were compared: 3D point-cloud-based deep learning (KPConv), a Convolutional Neural Network (CNN), and Random Forest (RF).
All models achieved promising results, reaching overall accuracy (OA) of up to 88.8%, 88.4% and 85.9% for KPConv, CNN and RF, respectively.
arXiv Detail & Related papers (2023-01-04T22:20:16Z) - Transferring learned patterns from ground-based field imagery to predict
UAV-based imagery for crop and weed semantic segmentation in precision crop
farming [3.95486899327898]
We have developed a deep convolutional network that predicts weed segmentation on both ground-based field images and aerial images from UAVs.
The network learning process is visualized by feature maps at shallow and deep layers.
The study shows that the developed deep convolutional neural network could be used to classify weeds from both field and aerial images.
arXiv Detail & Related papers (2022-10-20T19:25:06Z) - End-to-end deep learning for directly estimating grape yield from
ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
arXiv Detail & Related papers (2022-08-04T01:34:46Z) - Satellite Image Based Cross-view Localization for Autonomous Vehicle [59.72040418584396]
This paper shows that by using an off-the-shelf high-definition satellite image as a ready-to-use map, we are able to achieve cross-view vehicle localization up to a satisfactory accuracy.
Our method is validated on KITTI and Ford Multi-AV Seasonal datasets as ground view and Google Maps as the satellite view.
arXiv Detail & Related papers (2022-07-27T13:16:39Z) - Potato Crop Stress Identification in Aerial Images using Deep
Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z) - Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical
Understanding of Outdoor Scene [76.4183572058063]
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been point-wisely annotated with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measurement evaluating consistency across various hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z) - Learning CNN filters from user-drawn image markers for coconut-tree
image classification [78.42152902652215]
We present a method that needs a minimal set of user-selected images to train the CNN's feature extractor.
The method learns the filters of each convolutional layer from user-drawn markers in image regions that discriminate classes.
It does not rely on optimization based on backpropagation, and we demonstrate its advantages on the binary classification of coconut-tree aerial images.
arXiv Detail & Related papers (2020-08-08T15:50:23Z) - Deep-Learning-based Automated Palm Tree Counting and Geolocation in
Large Farms from Aerial Geotagged Images [1.8782750537161614]
We propose a framework for the automated counting and geolocation of palm trees from aerial images using convolutional neural networks.
For this purpose, we collected aerial images of a palm tree farm in the Kharj region, Riyadh, Saudi Arabia, using DJI drones.
arXiv Detail & Related papers (2020-05-11T17:11:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.