Tree Annotations in LiDAR Data Using Point Densities and Convolutional
Neural Networks
- URL: http://arxiv.org/abs/2006.05560v1
- Date: Tue, 9 Jun 2020 23:50:40 GMT
- Title: Tree Annotations in LiDAR Data Using Point Densities and Convolutional
Neural Networks
- Authors: Ananya Gupta, Jonathan Byrne, David Moloney, Simon Watson, Hujun Yin
- Abstract summary: We present three automatic methods for annotating trees in LiDAR data.
The first method requires high-density point clouds and uses certain LiDAR data attributes to identify trees, achieving almost 90% accuracy.
The second method uses a voxel-based 3D Convolutional Neural Network on low-density LiDAR datasets; it identifies most large trees accurately but struggles with smaller ones because of the voxelisation process.
The third method is a scaled version of PointNet++ that works directly on outdoor point clouds and achieves an F-score of 82.1% on the ISPRS benchmark dataset.
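The abstract only says the first method uses "certain LiDAR data attributes"; one attribute commonly used for vegetation detection is the number of returns per pulse, since foliage tends to scatter a pulse into multiple returns while hard surfaces yield a single one. A minimal sketch under that assumption (not the paper's actual method):

```python
import numpy as np

def flag_tree_candidates(num_returns, min_returns=2):
    """Flag points whose pulse produced multiple returns as tree candidates.

    num_returns: per-point "number of returns" attribute from the LiDAR file.
    min_returns: threshold above which a point is treated as vegetation.
    Returns a boolean mask over the points.
    """
    num_returns = np.asarray(num_returns)
    return num_returns >= min_returns

# Five pulses: single returns suggest hard surfaces, multiple returns foliage.
pulses = [1, 3, 2, 1, 4]
mask = flag_tree_candidates(pulses)
print(mask.tolist())  # single-return points are excluded
```

In practice such a heuristic would be combined with other attributes (height above ground, intensity) before reaching the ~90% accuracy the paper reports.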
- Score: 9.374986160570034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LiDAR provides highly accurate 3D point clouds. However, the data
must be manually labelled before it can yield useful information. Manual
annotation of such data is time-consuming, tedious, and error-prone, and hence
in this paper we present three automatic methods for annotating trees in LiDAR
data. The first method requires high-density point clouds and uses certain
LiDAR data attributes to identify trees, achieving almost 90% accuracy. The
second method uses a voxel-based 3D Convolutional Neural Network on low-density
LiDAR datasets; it identifies most large trees accurately but struggles with
smaller ones because of the voxelisation process. The third method is a scaled
version of PointNet++ that works directly on outdoor point clouds and achieves
an F-score of 82.1% on the ISPRS benchmark dataset, comparable to
state-of-the-art methods but with increased efficiency.
Related papers
- Benchmarking tree species classification from proximally-sensed laser scanning data: introducing the FOR-species20K dataset [1.2771525473423657]
FOR-species20K benchmark was created, comprising over 20,000 tree point clouds from 33 species.
This dataset enables the benchmarking of DL models for tree species classification.
The top model, DetailView, was particularly robust, handling data imbalances well and generalizing effectively across tree sizes.
arXiv Detail & Related papers (2024-08-12T21:47:15Z) - PointRegGPT: Boosting 3D Point Cloud Registration using Generative Point-Cloud Pairs for Training [90.06520673092702]
We present PointRegGPT, boosting 3D point cloud registration using generative point-cloud pairs for training.
To our knowledge, this is the first generative approach that explores realistic data generation for indoor point cloud registration.
arXiv Detail & Related papers (2024-07-19T06:29:57Z) - LiDAR-based curb detection for ground truth annotation in automated
driving validation [2.954315548942922]
This paper presents a method for detecting 3D curbs in a sequence of point clouds captured from a LiDAR sensor.
A sequence-level processing step estimates the 3D curbs in the reconstructed point cloud using the odometry of the vehicle.
These detections can be used as pre-annotations in labelling pipelines to efficiently generate curb-related ground truth data.
arXiv Detail & Related papers (2023-12-01T12:15:09Z) - V-DETR: DETR with Vertex Relative Position Encoding for 3D Object
Detection [73.37781484123536]
We introduce a highly performant 3D object detector for point clouds using the DETR framework.
To address the limitations of applying plain DETR to 3D, we introduce a novel 3D Vertex Relative Position Encoding (3DV-RPE) method.
We show exceptional results on the challenging ScanNetV2 benchmark.
arXiv Detail & Related papers (2023-08-08T17:14:14Z) - LiDAR-CS Dataset: LiDAR Point Cloud Dataset with Cross-Sensors for 3D
Object Detection [36.77084564823707]
Deep learning methods heavily rely on annotated data and often face domain generalization issues.
LiDAR-CS dataset is the first dataset that addresses the sensor-related gaps in the domain of 3D object detection in real traffic.
arXiv Detail & Related papers (2023-01-29T19:10:35Z) - Structure Aware and Class Balanced 3D Object Detection on nuScenes
Dataset [0.0]
NuTonomy's nuScenes dataset greatly extends commonly used datasets such as KITTI.
The localization precision of the CBGS model is affected by the loss of spatial information in the downscaled feature maps.
We propose to enhance its performance by designing an auxiliary network that makes full use of the structure information of the 3D point cloud.
arXiv Detail & Related papers (2022-05-25T06:18:49Z) - Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as that of the labeled ones.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data would be prioritized.
arXiv Detail & Related papers (2022-05-02T16:09:17Z) - Lifting 2D Object Locations to 3D by Discounting LiDAR Outliers across
Objects and Views [70.1586005070678]
We present a system for automatically converting 2D mask object predictions and raw LiDAR point clouds into full 3D bounding boxes of objects.
Our method significantly outperforms previous work, even though those methods use far more complex pipelines, 3D models, and additional human-annotated external sources of prior information.
arXiv Detail & Related papers (2021-09-16T13:01:13Z) - SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural
Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle the difficulties of learning odometry from raw, sparse LiDAR data.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performance on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z) - D3Feat: Joint Learning of Dense Detection and Description of 3D Local
Features [51.04841465193678]
We leverage a 3D fully convolutional network for 3D point clouds.
We propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point.
Our method achieves state-of-the-art results in both indoor and outdoor scenarios.
arXiv Detail & Related papers (2020-03-06T12:51:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.