Predicting Vegetation Stratum Occupancy from Airborne LiDAR Data with
Deep Learning
- URL: http://arxiv.org/abs/2201.08051v1
- Date: Thu, 20 Jan 2022 08:30:27 GMT
- Title: Predicting Vegetation Stratum Occupancy from Airborne LiDAR Data with
Deep Learning
- Authors: Ekaterina Kalinicheva, Loic Landrieu, Clément Mallet, Nesrine Chehata
- Abstract summary: We propose a new deep learning-based method for estimating the occupancy of vegetation from airborne 3D LiDAR point clouds.
Our model predicts rasterized occupancy maps for three vegetation strata corresponding to lower, medium, and higher cover.
- Score: 4.129847064263057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new deep learning-based method for estimating the occupancy of
vegetation strata from airborne 3D LiDAR point clouds. Our model predicts
rasterized occupancy maps for three vegetation strata corresponding to lower,
medium, and higher cover. Our weakly-supervised training scheme allows our
network to only be supervised with vegetation occupancy values aggregated over
cylindrical plots containing thousands of points. Such ground truth is easier
to produce than pixel-wise or point-wise annotations. Our method outperforms
handcrafted and deep learning baselines in terms of precision by up to 30%,
while simultaneously providing visual and interpretable predictions. We provide
an open-source implementation along with a dataset of 199 agricultural plots to
train and evaluate weakly supervised occupancy regression algorithms.
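As a rough illustration of the weakly-supervised scheme described above, the sketch below averages a rasterized occupancy prediction over a cylindrical plot footprint and regresses it against the plot-level label. All names, tensor shapes, and the MSE objective are illustrative assumptions and are not taken from the authors' released implementation.

```python
# Minimal sketch of plot-level weak supervision: per-pixel occupancy
# predictions are averaged over the cylindrical plot footprint and
# compared to the plot-level occupancy value (an assumption-based sketch,
# not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class OccupancyHead(nn.Module):
    """Maps rasterized per-pixel features to occupancy in [0, 1] for three strata."""
    def __init__(self, in_channels: int = 64, n_strata: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, n_strata, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) features derived from the 3D point cloud
        return torch.sigmoid(self.conv(feats))   # (B, 3, H, W) occupancy maps

def plot_aggregated_loss(occ_maps: torch.Tensor,
                         plot_mask: torch.Tensor,
                         plot_targets: torch.Tensor) -> torch.Tensor:
    """Weak supervision: compare plot-averaged predictions to plot-level labels.

    occ_maps:     (B, 3, H, W) predicted occupancy rasters
    plot_mask:    (B, 1, H, W) binary mask of pixels inside the cylindrical plot
    plot_targets: (B, 3)       ground-truth occupancy per stratum, in [0, 1]
    """
    pixels_in_plot = plot_mask.sum(dim=(2, 3)).clamp(min=1)              # (B, 1)
    plot_pred = (occ_maps * plot_mask).sum(dim=(2, 3)) / pixels_in_plot  # (B, 3)
    return F.mse_loss(plot_pred, plot_targets)
```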
Related papers
- OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments [77.0399450848749]
We propose an OccNeRF method for training occupancy networks without 3D supervision.
We parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras' infinite perceptive range.
For semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model.
arXiv Detail & Related papers (2023-12-14T18:58:52Z) - Self-Supervised Pre-Training Boosts Semantic Scene Segmentation on LiDAR
Data [0.0]
We propose to train a self-supervised encoder with Barlow Twins and use it as a pre-trained network in the task of semantic scene segmentation.
The experimental results demonstrate that our unsupervised pre-training boosts performance once fine-tuned on the supervised task.
arXiv Detail & Related papers (2023-09-05T11:29:30Z) - A Meta-Learning Approach to Predicting Performance and Data Requirements [163.4412093478316]
We propose an approach to estimate the number of samples required for a model to reach a target performance.
We find that the power law, the de facto principle for estimating model performance, leads to large errors when only a small dataset is used.
We introduce a novel piecewise power law (PPL) that handles the two data regimes differently.
arXiv Detail & Related papers (2023-03-02T21:48:22Z) - Joint Prediction of Monocular Depth and Structure using Planar and
Parallax Geometry [4.620624344434533]
Supervised learning depth estimation methods can achieve good performance when trained on high-quality ground-truth, like LiDAR data.
We propose a novel approach combining structure information from a promising Plane and Parallax geometry pipeline with depth information into a U-Net supervised learning network.
Our model performs well on depth prediction of thin objects and edges, and is more robust than the structure prediction baseline.
arXiv Detail & Related papers (2022-07-13T17:04:05Z) - Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV).
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled ones.
We show that NPC-LV outperforms supervised methods on all three datasets on image classification in the low-data regime.
arXiv Detail & Related papers (2022-06-23T09:35:03Z) - Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit
Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z) - Embedding Earth: Self-supervised contrastive pre-training for dense land
cover classification [61.44538721707377]
We present Embedding Earth, a self-supervised contrastive pre-training method for leveraging the large availability of satellite imagery.
We observe significant improvements up to 25% absolute mIoU when pre-trained with our proposed method.
We find that learnt features can generalize between disparate regions, opening up the possibility of using the proposed pre-training scheme.
arXiv Detail & Related papers (2022-03-11T16:14:14Z) - Vegetation Stratum Occupancy Prediction from Airborne LiDAR 3D Point
Clouds [5.7047887413125276]
We propose a new deep learning-based method for estimating the occupancy of vegetation strata from 3D point clouds captured from an aerial platform.
Our network is supervised with values aggregated over cylindrical plots, which are easier to produce than pixel-wise or point-wise annotations.
Our method outperforms handcrafted and deep learning baselines in terms of precision while simultaneously providing visual and interpretable predictions.
arXiv Detail & Related papers (2021-12-27T09:33:08Z) - SLPC: a VRNN-based approach for stochastic lidar prediction and
completion in autonomous driving [63.87272273293804]
We propose a new LiDAR prediction framework based on generative models, namely Variational Recurrent Neural Networks (VRNNs).
Our algorithm is able to address the limitations of previous video prediction frameworks when dealing with sparse data by spatially inpainting the depth maps in the upcoming frames.
We present a sparse version of VRNNs and an effective self-supervised training method that does not require any labels.
arXiv Detail & Related papers (2021-02-19T11:56:44Z) - Machine Learning in LiDAR 3D point clouds [0.0]
We present a preliminary comparison study for the classification of 3D point cloud LiDAR data.
In particular, we demonstrate that providing context by augmenting each point in the LiDAR point cloud with information about its neighboring points can improve the performance of downstream learning algorithms; a minimal sketch of this neighborhood-augmentation idea follows the list below.
We also experiment with several dimension reduction strategies, ranging from Principal Component Analysis (PCA) to neural network-based auto-encoders, and demonstrate how they affect classification performance in LiDAR point clouds.
arXiv Detail & Related papers (2021-01-22T20:23:23Z)
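For concreteness, here is a minimal sketch of the neighborhood-augmentation idea referenced in the last entry above. The specific context features (local centroid offset, vertical extent, distance to the k-th neighbor) are assumptions chosen for illustration, not that paper's feature set.

```python
# Hedged illustration of neighborhood augmentation for LiDAR points:
# each point is enriched with simple statistics of its k nearest neighbors
# before being fed to a classifier. Feature choices are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def augment_with_neighborhood(points: np.ndarray, k: int = 16) -> np.ndarray:
    """points: (N, 3) LiDAR coordinates; returns (N, 8) augmented features."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k)                # k nearest neighbors (incl. the point itself)
    neighbors = points[idx]                             # (N, k, 3)
    centroid = neighbors.mean(axis=1)                   # local centroid
    height_range = np.ptp(neighbors[:, :, 2], axis=1)   # vertical extent, relevant for strata
    radius = dists[:, -1]                               # distance to the k-th neighbor (density proxy)
    extra = np.column_stack([centroid - points, height_range, radius])
    return np.hstack([points, extra])                   # original coordinates + 5 context features
```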
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.