Estimating the Diameter at Breast Height of Trees in a Forest With a Single 360 Camera
- URL: http://arxiv.org/abs/2505.03093v2
- Date: Thu, 15 May 2025 14:24:44 GMT
- Title: Estimating the Diameter at Breast Height of Trees in a Forest With a Single 360 Camera
- Authors: Siming He, Zachary Osman, Fernando Cladera, Dexter Ong, Nitant Rai, Patrick Corey Green, Vijay Kumar, Pratik Chaudhari
- Abstract summary: Forest inventories rely on accurate measurements of the diameter at breast height (DBH) for ecological monitoring, resource management, and carbon accounting. While LiDAR-based techniques can achieve centimeter-level precision, they are cost-prohibitive and operationally complex. We present a low-cost alternative that only needs a consumer-grade 360 video camera.
- Score: 52.85399274741336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Forest inventories rely on accurate measurements of the diameter at breast height (DBH) for ecological monitoring, resource management, and carbon accounting. While LiDAR-based techniques can achieve centimeter-level precision, they are cost-prohibitive and operationally complex. We present a low-cost alternative that only needs a consumer-grade 360 video camera. Our semi-automated pipeline comprises (i) dense point cloud reconstruction using the Structure from Motion (SfM) photogrammetry software Agisoft Metashape, (ii) semantic trunk segmentation by projecting Grounded Segment Anything (SAM) masks onto the 3D cloud, and (iii) a robust RANSAC-based technique to estimate cross-section shape and DBH. We introduce an interactive visualization tool for inspecting segmented trees and their estimated DBH. On 61 acquisitions of 43 trees under a variety of conditions, our method attains median absolute relative errors of 5-9% with respect to "ground-truth" manual measurements. This is only 2-4% higher than LiDAR-based estimates, while employing a single 360 camera that costs orders of magnitude less, requires minimal setup, and is widely available.
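The authors' implementation is not included in this listing; as a rough illustration of step (iii), the sketch below fits a circle to a breast-height slice of one segmented trunk with RANSAC and reports twice the radius as DBH. The slice height and thickness, the inlier tolerance, the ground estimate, and the use of a plain circle (rather than a more general cross-section model) are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch, not the authors' released code: RANSAC circle fit on a
# breast-height slice of a segmented trunk point cloud (NumPy only).
import numpy as np

def fit_circle_3pts(p1, p2, p3):
    """Circumcircle of three 2D points; returns (center, radius) or None if collinear."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, float(np.linalg.norm(center - p1))

def ransac_dbh(trunk_points, breast_height=1.3, slice_thickness=0.10,
               n_iters=1000, inlier_tol=0.02, seed=0):
    """Estimate DBH (metres) from an (N, 3) trunk point cloud with z up.

    Illustrative assumptions: the lowest trunk point approximates the ground,
    the slice is taken 1.3 m above it, and the cross section is a circle.
    """
    rng = np.random.default_rng(seed)
    z_ground = trunk_points[:, 2].min()
    in_slice = np.abs(trunk_points[:, 2] - (z_ground + breast_height)) < slice_thickness / 2
    xy = trunk_points[in_slice, :2]
    if len(xy) < 3:
        raise ValueError("too few points in the breast-height slice")
    best_inliers, best_radius = -1, None
    for _ in range(n_iters):
        # Sample three slice points, fit a candidate circle, count inliers.
        sample = xy[rng.choice(len(xy), size=3, replace=False)]
        fit = fit_circle_3pts(sample[0], sample[1], sample[2])
        if fit is None:
            continue
        center, radius = fit
        residuals = np.abs(np.linalg.norm(xy - center, axis=1) - radius)
        n_inliers = int((residuals < inlier_tol).sum())
        if n_inliers > best_inliers:
            best_inliers, best_radius = n_inliers, radius
    return 2.0 * best_radius  # diameter in metres

# Usage (hypothetical): dbh_cm = 100 * ransac_dbh(points)  # points: (N, 3) trunk cloud
```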
Related papers
- Zero-shot Inexact CAD Model Alignment from a Single Image [53.37898107159792]
A practical approach to infer 3D scene structure from a single image is to retrieve a closely matching 3D model from a database and align it with the object in the image. Existing methods rely on supervised training with images and pose annotations, which limits them to a narrow set of object categories. We propose a weakly supervised 9-DoF alignment method for inexact 3D models that requires no pose annotations and generalizes to unseen categories.
arXiv Detail & Related papers (2025-07-04T04:46:59Z) - A Unified Graph-based Framework for Scalable 3D Tree Reconstruction and Non-Destructive Biomass Estimation from Point Clouds [8.821870725779071]
Estimating forest above-ground biomass (AGB) is crucial for assessing carbon storage and supporting sustainable forest management. The Quantitative Structural Model (QSM) offers a non-destructive approach to AGB estimation through 3D tree structural reconstruction. This study presents a novel unified framework that enables end-to-end processing of large-scale point clouds.
arXiv Detail & Related papers (2025-06-18T15:55:47Z) - Bringing SAM to new heights: Leveraging elevation data for tree crown segmentation from drone imagery [68.69685477556682]
Current monitoring methods involve ground measurements, requiring extensive cost, time and labor. Drone remote sensing and computer vision offer great potential for mapping individual trees from aerial imagery at broad scale. We compare methods leveraging the Segment Anything Model (SAM) for the task of automatic tree crown instance segmentation in high-resolution drone imagery. We also study the integration of elevation data into models, in the form of Digital Surface Model (DSM) information, which can readily be obtained at no additional cost from RGB drone imagery.
arXiv Detail & Related papers (2025-06-05T12:43:11Z) - Assessing SAM for Tree Crown Instance Segmentation from Drone Imagery [68.69685477556682]
Current monitoring methods involve measuring trees by hand for each species, requiring extensive cost, time, and labour. Advances in drone remote sensing and computer vision offer great potential for mapping and characterizing trees from aerial imagery. We compare SAM methods for the task of automatic tree crown instance segmentation in high-resolution drone imagery of young tree plantations. We find that methods using SAM out-of-the-box do not outperform a custom Mask R-CNN, even with well-designed prompts, but that there is potential for methods which tune SAM further.
arXiv Detail & Related papers (2025-03-26T03:45:36Z) - DeepForest: Sensing Into Self-Occluding Volumes of Vegetation With Aerial Imaging [8.093958936744807]
A long-standing limitation of remote sensing is its inability to penetrate deep into dense canopy layers. LiDAR and radar are currently considered the primary options for measuring 3D vegetation structures. Our approach allows sensing deep into self-occluding vegetation volumes, such as forests.
arXiv Detail & Related papers (2025-02-04T09:45:49Z) - Towards autonomous photogrammetric forest inventory using a lightweight under-canopy robotic drone [1.0964031083527972]
This article builds a prototype of a robotic under-canopy drone utilizing state-of-the-art open-source methods and validates its performance for data collection inside forests. The tree parameter estimation capability was studied by conducting diameter at breast height (DBH) estimation using onboard stereo camera data and photogrammetric methods. The experiments showed excellent performance in forest reconstruction with the stereoscopic photogrammetric system.
arXiv Detail & Related papers (2025-01-21T11:59:07Z) - Tomographic SAR Reconstruction for Forest Height Estimation [4.1942958779358674]
Tree height estimation serves as an important proxy for biomass estimation in ecological and forestry applications. In this study, we use deep learning to estimate forest canopy height directly from 2D Single Look Complex (SLC) images, a derivative of Synthetic Aperture Radar (SAR). Our method attempts to bypass traditional tomographic signal processing, potentially reducing latency from SAR capture to end product.
arXiv Detail & Related papers (2024-12-01T17:37:25Z) - 3D-SAR Tomography and Machine Learning for High-Resolution Tree Height Estimation [4.1942958779358674]
Tree height, a key factor in biomass calculations, can be measured using Synthetic Aperture Radar (SAR) technology.
This study applies machine learning to extract forest height data from two SAR products.
We use the TomoSense dataset, containing SAR and LiDAR data from Germany's Eifel National Park, to develop and evaluate height estimation models.
arXiv Detail & Related papers (2024-09-09T14:07:38Z) - SDGE: Stereo Guided Depth Estimation for 360$^\circ$ Camera Sets [65.64958606221069]
Multi-camera systems are often used in autonomous driving to achieve a 360$^\circ$ perception.
These 360$^\circ$ camera sets often have limited or low-quality overlap regions, making multi-view stereo methods infeasible for the entire image.
We propose the Stereo Guided Depth Estimation (SGDE) method, which enhances depth estimation of the full image by explicitly utilizing multi-view stereo results on the overlap.
arXiv Detail & Related papers (2024-02-19T02:41:37Z) - Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on Aerial Lidar [14.07306593230776]
This paper presents the first high-resolution canopy height map concurrently produced for multiple sub-national jurisdictions.
The maps are generated by the extraction of features from a self-supervised model trained on Maxar imagery from 2017 to 2020.
We also introduce a post-processing step using a convolutional network trained on GEDI observations.
arXiv Detail & Related papers (2023-04-14T15:52:57Z) - Collaboration Helps Camera Overtake LiDAR in 3D Detection [49.58433319402405]
Camera-only 3D detection provides a simpler solution for localizing objects in 3D space than LiDAR-based detection systems.
Our proposed collaborative camera-only 3D detection (CoCa3D) enables agents to share complementary information with each other through communication.
Results show that CoCa3D improves previous SOTA performances by 44.21% on DAIR-V2X, 30.60% on OPV2V+, 12.59% on CoPerception-UAVs+ for AP@70.
arXiv Detail & Related papers (2023-03-23T03:50:41Z) - LiDAR guided Small obstacle Segmentation [14.880698940693609]
Detecting small obstacles on the road is critical for autonomous driving.
We present a method to reliably detect such obstacles through a multi-modal framework of sparse LiDAR and monocular vision.
We show significant performance gains when the context is fed as an additional input to monocular semantic segmentation frameworks.
arXiv Detail & Related papers (2020-03-12T18:34:46Z) - Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)