Towards autonomous photogrammetric forest inventory using a lightweight under-canopy robotic drone
- URL: http://arxiv.org/abs/2501.12073v1
- Date: Tue, 21 Jan 2025 11:59:07 GMT
- Title: Towards autonomous photogrammetric forest inventory using a lightweight under-canopy robotic drone
- Authors: Väinö Karjalainen, Niko Koivumäki, Teemu Hakala, Jesse Muhojoki, Eric Hyyppä, Anand George, Juha Suomalainen, Eija Honkavaara
- Abstract summary: This article builds a prototype of a robotic under-canopy drone using state-of-the-art open-source methods and validates its performance for data collection inside forests.
The tree parameter estimation capability was studied by conducting diameter at breast height (DBH) estimation using onboard stereo camera data and photogrammetric methods.
The experiments showed excellent performance in forest reconstruction with a stereoscopic photogrammetric system.
- Score: 1.0964031083527972
- Abstract: Drones are increasingly used in forestry to capture high-resolution remote sensing data. While operations above the forest canopy are already highly automated, flying inside forests remains challenging, primarily relying on manual piloting. Inside dense forests, reliance on the Global Navigation Satellite System (GNSS) for localization is not feasible. Additionally, the drone must autonomously adjust its flight path to avoid collisions. Recently, advancements in robotics have enabled autonomous drone flights in GNSS-denied obstacle-rich areas. In this article, a step towards autonomous forest data collection is taken by building a prototype of a robotic under-canopy drone utilizing state-of-the-art open-source methods and validating its performance for data collection inside forests. The autonomous flight capability was evaluated through multiple test flights in two boreal forest test sites. The tree parameter estimation capability was studied by conducting diameter at breast height (DBH) estimation using onboard stereo camera data and photogrammetric methods. The prototype conducted flights in selected challenging forest environments, and the experiments showed excellent performance in forest reconstruction with a miniaturized stereoscopic photogrammetric system. The stem detection algorithm managed to identify 79.31 % of the stems. The DBH estimation had a root mean square error (RMSE) of 3.33 cm (12.79 %) and a bias of 1.01 cm (3.87 %) across all trees. For trees with a DBH less than 30 cm, the RMSE was 1.16 cm (5.74 %), and the bias was 0.13 cm (0.64 %). When considering the overall performance in terms of DBH accuracy, autonomy, and forest complexity, the proposed approach was superior compared to methods proposed in the scientific literature. Results provided valuable insights into autonomous forest reconstruction using drones, and several further development topics were proposed.
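The reported accuracy figures follow the standard definitions of RMSE and bias over paired estimates. A minimal sketch of how these metrics are computed from estimated and field-measured DBH values (the sample numbers below are illustrative, not the paper's data):

```python
import math

def dbh_errors(estimated_cm, reference_cm):
    """Return (RMSE, bias) in cm between estimated and field-measured DBH."""
    errors = [e - r for e, r in zip(estimated_cm, reference_cm)]
    n = len(errors)
    rmse = math.sqrt(sum(d * d for d in errors) / n)  # root mean square error
    bias = sum(errors) / n                            # mean signed error
    return rmse, bias

# Illustrative values only (not from the paper):
rmse, bias = dbh_errors([25.1, 31.9, 18.2], [24.0, 30.0, 18.5])
```

Relative RMSE and bias, as reported in the abstract, divide these quantities by the mean reference DBH.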
Related papers
- NeRF-Accelerated Ecological Monitoring in Mixed-Evergreen Redwood Forest [0.0]
We present a comparison of MLS and NeRF forest reconstructions for the purpose of trunk diameter estimation in a mixed-evergreen Redwood forest.
We propose an improved DBH-estimation method using convex-hull modeling.
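To illustrate the convex-hull idea: fit a convex hull to the horizontal cross-section points of a trunk at breast height and estimate the diameter from the hull perimeter, assuming a roughly circular stem. This is a hedged sketch; the function names and the perimeter/π heuristic are assumptions, not the authors' exact method:

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; points is a list of (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_dbh_cm(cross_section_points):
    """Diameter estimate: hull perimeter / pi (assumes a near-circular stem)."""
    hull = convex_hull(cross_section_points)
    perimeter = sum(
        math.dist(hull[i], hull[(i + 1) % len(hull)]) for i in range(len(hull))
    )
    return perimeter / math.pi

# Points sampled on a circle of radius 15 cm should give a DBH near 30 cm.
ring = [(15 * math.cos(a), 15 * math.sin(a))
        for a in (i * math.pi / 8 for i in range(16))]
dbh = hull_dbh_cm(ring)
```

The hull makes the estimate robust to points from bark roughness or noise inside the stem outline, since only the outer boundary contributes.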
arXiv Detail & Related papers (2024-10-09T20:32:15Z)
- Drone Stereo Vision for Radiata Pine Branch Detection and Distance Measurement: Integrating SGBM and Segmentation Models [4.730379319834545]
This research proposes the development of a drone-based pruning system equipped with specialized pruning tools and a stereo vision camera.
Deep learning algorithms, including YOLO and Mask R-CNN, are employed to ensure accurate branch detection.
The synergy between these techniques facilitates the precise identification of branch locations and enables efficient, targeted pruning.
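The distance measurement behind such a stereo system reduces to the pinhole disparity model, Z = f·B/d. A minimal sketch (the focal length and baseline values below are illustrative assumptions, not the system's calibration):

```python
def disparity_to_distance_m(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: Z = f * B / d, with disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    return focal_px * baseline_m / disparity_px

# A branch at 64 px disparity with an 800 px focal length and 12 cm baseline:
z = disparity_to_distance_m(64, focal_px=800, baseline_m=0.12)  # 1.5 m
```

SGBM produces the dense disparity map; this conversion then gives metric distances per detected branch pixel.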
arXiv Detail & Related papers (2023-04-22T22:39:03Z)
- Vision Transformers, a new approach for high-resolution and large-scale mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolutional based approaches (ConvNets) optimized with only a continuous loss function.
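A hedged sketch of the combined-objective idea: cross-entropy over discretized height bins plus a continuous regression term on the expected height. The bin layout, weighting, and expected-height regression here are assumptions for illustration, not the authors' exact formulation:

```python
import math

def combined_loss(logits, height_bins, target_height, alpha=0.5):
    """Hypothetical combined loss: cross-entropy over height bins plus
    squared error of the probability-weighted expected height."""
    # Softmax over the bin logits
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Target bin = the bin whose center is nearest the true height
    t = min(range(len(height_bins)), key=lambda i: abs(height_bins[i] - target_height))
    ce = -math.log(probs[t])
    # Continuous term: squared error of the expected height
    expected = sum(p * b for p, b in zip(probs, height_bins))
    sq = (expected - target_height) ** 2
    return alpha * ce + (1 - alpha) * sq

loss = combined_loss([0.0, 0.0, 0.0, 0.0], [0, 10, 20, 30], 10.0)
```

The discrete term stabilizes training on the long-tailed height distribution, while the continuous term keeps predictions metrically accurate.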
arXiv Detail & Related papers (2023-03-02T21:24:19Z)
- AZTR: Aerial Video Action Recognition with Auto Zoom and Temporal Reasoning [63.628195002143734]
We propose a novel approach for aerial video action recognition.
Our method is designed for videos captured using UAVs and can run on edge or mobile devices.
We present a learning-based approach that uses customized auto zoom to automatically identify the human target and scale it appropriately.
arXiv Detail & Related papers (2022-12-20T14:14:37Z)
- High-resolution canopy height map in the Landes forest (France) based on GEDI, Sentinel-1, and Sentinel-2 data with a deep learning approach [0.044381279572631216]
We develop a deep learning model based on multi-stream remote sensing measurements to create a high-resolution canopy height map.
The model outputs allow us to generate a 10 m resolution canopy height map of the whole "Landes de Gascogne" forest area for 2020.
For all validation datasets in coniferous forests, our model showed better metrics than previous canopy height models available in the same region.
arXiv Detail & Related papers (2022-10-16T03:05:13Z)
- TransVisDrone: Spatio-Temporal Transformer for Vision-based Drone-to-Drone Detection in Aerial Videos [57.92385818430939]
Drone-to-drone detection using visual feed has crucial applications, such as detecting drone collisions, detecting drone attacks, or coordinating flight with other drones.
Existing methods are computationally costly, follow non-end-to-end optimization, and have complex multi-stage pipelines, making them less suitable for real-time deployment on edge devices.
We propose a simple yet effective framework, TransVisDrone, that provides an end-to-end solution with higher computational efficiency.
arXiv Detail & Related papers (2022-08-08T08:32:56Z)
- Aerial Monocular 3D Object Detection [67.20369963664314]
DVDET is proposed to achieve aerial monocular 3D object detection in both the 2D image space and the 3D physical space.
To address the severe view deformation issue, we propose a novel trainable geo-deformable transformation module.
To encourage more researchers to investigate this area, we will release the dataset and related code.
arXiv Detail & Related papers (2022-01-07T22:32:08Z)
- Development of Automatic Tree Counting Software from UAV Based Aerial Images With Machine Learning [0.0]
This study aims to automatically count trees in designated areas on the Siirt University campus from high-resolution images obtained by UAV.
Images obtained at 30 meters height with 20% overlap were stitched offline at the ground station using Adobe Photoshop's photo merge tool.
arXiv Detail & Related papers (2021-11-25T16:21:28Z)
- Country-wide Retrieval of Forest Structure From Optical and SAR Satellite Imagery With Bayesian Deep Learning [74.94436509364554]
We propose a Bayesian deep learning approach to densely estimate forest structure variables at country-scale with 10-meter resolution.
Our method jointly transforms Sentinel-2 optical images and Sentinel-1 synthetic aperture radar images into maps of five different forest structure variables.
We train and test our model on reference data from 41 airborne laser scanning missions across Norway.
arXiv Detail & Related papers (2021-05-10T13:05:22Z)
- An Autonomous Drone for Search and Rescue in Forests using Airborne Optical Sectioning [0.0]
We present a first prototype that finds people fully autonomously in densely occluded forests.
In the course of 17 field experiments conducted over various forest types, our drone found 38 out of 42 hidden persons.
Deep-learning-based person classification is unaffected by sparse and error-prone sampling within one-dimensional synthetic apertures.
arXiv Detail & Related papers (2019-10-10T02:21:52Z)
- Agent with Warm Start and Active Termination for Plane Localization in 3D Ultrasound [56.14006424500334]
Standard plane localization is crucial for ultrasound (US) diagnosis.
In prenatal US, dozens of standard planes are manually acquired with a 2D probe.
We propose a novel reinforcement learning framework to automatically localize fetal brain standard planes in 3D US.
arXiv Detail & Related papers (2019-10-10T02:21:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.