Vegetation Stratum Occupancy Prediction from Airborne LiDAR 3D Point Clouds
- URL: http://arxiv.org/abs/2112.13583v1
- Date: Mon, 27 Dec 2021 09:33:08 GMT
- Title: Vegetation Stratum Occupancy Prediction from Airborne LiDAR 3D Point Clouds
- Authors: Ekaterina Kalinicheva, Loic Landrieu, Clément Mallet, Nesrine Chehata
- Abstract summary: We propose a new deep learning-based method for estimating the occupancy of vegetation strata from 3D point clouds captured from an aerial platform.
Our network is supervised with values aggregated over cylindrical plots, which are easier to produce than pixel-wise or point-wise annotations.
Our method outperforms handcrafted and deep learning baselines in terms of precision while simultaneously providing visual and interpretable predictions.
- Score: 5.7047887413125276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new deep learning-based method for estimating the occupancy of
vegetation strata from 3D point clouds captured from an aerial platform. Our
model predicts rasterized occupancy maps for three vegetation strata: lower,
medium, and higher strata. Our training scheme allows our network to be
supervised only with values aggregated over cylindrical plots, which are easier
to produce than pixel-wise or point-wise annotations. Our method outperforms
handcrafted and deep learning baselines in terms of precision while
simultaneously providing visual and interpretable predictions. We provide an
open-source implementation of our method along with a dataset of 199
agricultural plots to train and evaluate occupancy regression algorithms.
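The key idea of the training scheme, supervising a rasterized prediction with a single plot-level value, can be sketched as follows. This is a minimal illustration with numpy, not the paper's implementation: the function name, the mean aggregation, and the squared-error loss are all assumptions made for the sketch.

```python
import numpy as np

def plot_aggregated_loss(pred_raster, plot_center, plot_radius, plot_value,
                         cell_size=1.0):
    """Squared error between a plot-level annotation and the mean of the
    rasterized predictions falling inside a cylindrical plot footprint.
    Hypothetical sketch of plot-aggregated supervision."""
    h, w = pred_raster.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = plot_center
    # Distance from each cell centre to the plot centre, in map units.
    dist = np.hypot((ys + 0.5) * cell_size - cy, (xs + 0.5) * cell_size - cx)
    mask = dist <= plot_radius
    aggregated = pred_raster[mask].mean()  # plot-level prediction
    return (aggregated - plot_value) ** 2
```

Because the loss only compares the aggregate over the plot to the annotation, the network is free to place occupancy anywhere inside the footprint, which is what makes cheap plot-level labels usable.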
Related papers
- NumGrad-Pull: Numerical Gradient Guided Tri-plane Representation for Surface Reconstruction from Point Clouds [41.723434094309184]
Reconstructing continuous surfaces from unoriented and unordered 3D points is a fundamental challenge in computer vision and graphics.
Recent advancements address this problem by training neural signed distance functions to pull 3D location queries to their closest points on a surface.
We introduce NumGrad-Pull, leveraging the representation capability of tri-plane structures to accelerate the learning of signed distance functions.
arXiv Detail & Related papers (2024-11-26T12:54:30Z)
- OPUS: Occupancy Prediction Using a Sparse Set [64.60854562502523]
We present a framework to simultaneously predict occupied locations and classes using a set of learnable queries.
OPUS incorporates a suite of non-trivial strategies to enhance model performance.
Our lightest model achieves superior RayIoU on the Occ3D-nuScenes dataset at near 2x FPS, while our heaviest model surpasses previous best results by 6.1 RayIoU.
arXiv Detail & Related papers (2024-09-14T07:44:22Z)
- OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments [77.0399450848749]
We propose an OccNeRF method for training occupancy networks without 3D supervision.
We parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras' infinite perceptive range.
For semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model.
arXiv Detail & Related papers (2023-12-14T18:58:52Z)
- A Meta-Learning Approach to Predicting Performance and Data Requirements [163.4412093478316]
We propose an approach to estimate the number of samples required for a model to reach a target performance.
We find that the power law, the de facto principle to estimate model performance, leads to large error when using a small dataset.
We introduce a novel piecewise power law (PPL) that handles the small-data and large-data regimes differently.
arXiv Detail & Related papers (2023-03-02T21:48:22Z)
- Joint Prediction of Monocular Depth and Structure using Planar and Parallax Geometry [4.620624344434533]
Supervised learning depth estimation methods can achieve good performance when trained on high-quality ground-truth, like LiDAR data.
We propose a novel approach combining structure information from a promising Plane and Parallax geometry pipeline with depth information into a U-Net supervised learning network.
Our model achieves impressive performance on depth prediction of thin objects and edges, and performs more robustly than the structure prediction baseline.
arXiv Detail & Related papers (2022-07-13T17:04:05Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- Predicting Vegetation Stratum Occupancy from Airborne LiDAR Data with Deep Learning [4.129847064263057]
We propose a new deep learning-based method for estimating the occupancy of vegetation from airborne 3D LiDAR point clouds.
Our model predicts rasterized occupancy maps for three vegetation strata corresponding to lower, medium, and higher cover.
arXiv Detail & Related papers (2022-01-20T08:30:27Z)
- Learning Semantic Segmentation of Large-Scale Point Clouds with Random Sampling [52.464516118826765]
We introduce RandLA-Net, an efficient and lightweight neural architecture to infer per-point semantics for large-scale point clouds.
The key to our approach is to use random point sampling instead of more complex point selection approaches.
Our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches.
arXiv Detail & Related papers (2021-07-06T05:08:34Z)
- SLPC: a VRNN-based approach for stochastic lidar prediction and completion in autonomous driving [63.87272273293804]
We propose a new LiDAR prediction framework based on generative models, namely Variational Recurrent Neural Networks (VRNNs).
Our algorithm is able to address the limitations of previous video prediction frameworks when dealing with sparse data by spatially inpainting the depth maps in the upcoming frames.
We present a sparse version of VRNNs and an effective self-supervised training method that does not require any labels.
arXiv Detail & Related papers (2021-02-19T11:56:44Z)
- Machine Learning in LiDAR 3D point clouds [0.0]
We present a preliminary comparison study for the classification of 3D point cloud LiDAR data.
In particular, we demonstrate that providing context by augmenting each point in the LiDAR point cloud with information about its neighboring points can improve the performance of downstream learning algorithms.
We also experiment with several dimension reduction strategies, ranging from Principal Component Analysis (PCA) to neural network-based auto-encoders, and demonstrate how they affect classification performance in LiDAR point clouds.
arXiv Detail & Related papers (2021-01-22T20:23:23Z)
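The context-augmentation idea described in the last entry, enriching each point with statistics of its neighbourhood before classification, can be sketched as follows. This is an assumed illustration: the feature set (mean neighbour offset plus local covariance eigenvalues) and the brute-force neighbour search are choices made for the sketch, not the paper's exact pipeline.

```python
import numpy as np

def augment_with_neighbor_stats(points, k=8):
    """Append per-point neighbourhood context to an (N, 3) point cloud:
    the mean offset to the k nearest neighbours and the eigenvalues of the
    neighbourhood covariance (simple local shape descriptors).
    Hypothetical sketch; real pipelines use a KD-tree for the search."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    feats = []
    for i in range(len(points)):
        nn = points[np.argsort(d2[i])[1:k + 1]]      # k nearest neighbours, excluding self
        offsets = nn - points[i]
        cov = np.cov(offsets.T)
        eig = np.sort(np.linalg.eigvalsh(cov))[::-1]  # sorted descending
        feats.append(np.concatenate([offsets.mean(0), eig]))
    return np.hstack([points, np.array(feats)])
```

The augmented array keeps the original coordinates in the first three columns and appends six context features per point, which a downstream classifier can consume directly.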
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.