Simulated LiDAR Repositioning: a novel point cloud data augmentation method
- URL: http://arxiv.org/abs/2111.10650v1
- Date: Sat, 20 Nov 2021 18:35:39 GMT
- Title: Simulated LiDAR Repositioning: a novel point cloud data augmentation method
- Authors: Xavier Morin-Duchesne and Michael S. Langer (McGill University)
- Abstract summary: Given a LiDAR scan of a scene from some position, how can one simulate new scans of that scene from different, secondary positions?
The method defines criteria for selecting valid secondary positions, and then estimates which points from the original point cloud would be acquired by a scanner from these positions.
We show that the method is more accurate at short distances, and that a high angular resolution in the original point cloud strongly affects the similarity of the generated point clouds.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address a data augmentation problem for LiDAR. Given a LiDAR scan of a
scene from some position, how can one simulate new scans of that scene from
different, secondary positions? The method defines criteria for selecting valid
secondary positions, and then estimates which points from the original point
cloud would be acquired by a scanner from these positions. We validate the
method using synthetic scenes, and examine how the similarity of generated
point clouds depends on scanner distance, occlusion, and angular resolution. We
show that the method is more accurate at short distances, and that having a
high scanner resolution for the original point clouds has a strong impact on
the similarity of generated point clouds. We also demonstrate how the method
can be applied to natural scene statistics: in particular, we apply our method
to reposition the scanner horizontally and vertically, separately consider
points belonging to the ground and to non-ground objects, and describe the
impact on the distributions of distances to these two classes of points.
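As a rough illustration of the idea described in the abstract (not the authors' exact selection criteria), the sketch below re-expresses the original points in a secondary scanner's spherical coordinates, quantizes the directions to that scanner's angular resolution, and keeps only the nearest point in each angular cell as a simple occlusion approximation. The function name, the binning scheme, and the default resolutions are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of simulated LiDAR repositioning under a simple angular-binning
# visibility model. This approximates, not reproduces, the paper's method.
import numpy as np

def simulate_repositioned_scan(points, secondary_pos, az_res_deg=0.2, el_res_deg=0.2):
    """Estimate which of the original `points` (N x 3 array) a scanner placed at
    `secondary_pos` would acquire, given its angular resolution in degrees."""
    rel = np.asarray(points, dtype=float) - np.asarray(secondary_pos, dtype=float)
    dist = np.maximum(np.linalg.norm(rel, axis=1), 1e-9)   # guard against a point at the scanner
    az = np.degrees(np.arctan2(rel[:, 1], rel[:, 0]))      # azimuth in (-180, 180]
    el = np.degrees(np.arcsin(np.clip(rel[:, 2] / dist, -1.0, 1.0)))  # elevation in [-90, 90]

    # Quantize each direction to the simulated scanner's angular grid.
    az_cell = np.floor((az + 180.0) / az_res_deg).astype(np.int64)
    el_cell = np.floor((el + 90.0) / el_res_deg).astype(np.int64)
    cell_id = az_cell * 1_000_000 + el_cell                 # unique id per angular cell

    # Within each cell, the nearest point occludes everything behind it.
    order = np.lexsort((dist, cell_id))                     # sort by cell, then by range
    sorted_cells = cell_id[order]
    is_first = np.ones(order.size, dtype=bool)
    is_first[1:] = sorted_cells[1:] != sorted_cells[:-1]
    return np.asarray(points)[order[is_first]]

# Example: resample a synthetic scene from a position 2 m to the right of the origin.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.uniform(-10.0, 10.0, size=(100_000, 3))
    new_scan = simulate_repositioned_scan(scene, secondary_pos=(2.0, 0.0, 0.0))
    print(new_scan.shape)
```

Under this simplification, a higher angular resolution in the original scan provides more candidate points per cell of the simulated scanner, which is consistent with the abstract's observation that the original scan's resolution strongly affects the similarity of the generated point clouds.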
Related papers
- Simultaneous Diffusion Sampling for Conditional LiDAR Generation [24.429704313319398]
This paper proposes a novel simultaneous diffusion sampling methodology to generate point clouds conditioned on the 3D structure of the scene.
Our method can produce accurate and geometrically consistent enhancements to point cloud scans, allowing it to outperform existing methods by a large margin in a variety of benchmarks.
arXiv Detail & Related papers (2024-10-15T14:15:04Z)
- Learning Continuous Implicit Field with Local Distance Indicator for Arbitrary-Scale Point Cloud Upsampling [55.05706827963042]
Point cloud upsampling aims to generate dense and uniformly distributed point sets from a sparse point cloud.
Previous methods typically split a sparse point cloud into several local patches, upsample patch points, and merge all upsampled patches.
We propose a novel approach that learns an unsigned distance field guided by local priors for point cloud upsampling.
arXiv Detail & Related papers (2023-12-23T01:52:14Z)
- Quadric Representations for LiDAR Odometry, Mapping and Localization [93.24140840537912]
Current LiDAR odometry, mapping and localization methods leverage point-wise representations of 3D scenes.
We propose a novel method of describing scenes using quadric surfaces, which are far more compact representations of 3D objects.
Our method maintains low latency and memory utility while achieving competitive, and even superior, accuracy.
arXiv Detail & Related papers (2023-04-27T13:52:01Z)
- (LC)$^2$: LiDAR-Camera Loop Constraints For Cross-Modal Place Recognition [0.9449650062296824]
We propose a novel cross-matching method, called (LC)$^2$, for achieving LiDAR localization without a prior point cloud map.
A network is trained to extract localization descriptors from disparity and range images.
We demonstrate that LiDAR-based navigation systems could be optimized from image databases and vice versa.
arXiv Detail & Related papers (2023-04-17T23:20:16Z)
- PolarMix: A General Data Augmentation Technique for LiDAR Point Clouds [100.03877236181546]
PolarMix is a point cloud augmentation technique that is simple and generic.
It can work as plug-and-play for various 3D deep architectures and also performs well for unsupervised domain adaptation.
arXiv Detail & Related papers (2022-07-30T13:52:19Z)
- Dynamic 3D Scene Analysis by Point Cloud Accumulation [32.491921765128936]
Multi-beam LiDAR sensors are used on autonomous vehicles and mobile robots.
Each frame covers the scene sparsely, due to limited angular scanning resolution and occlusion.
We propose a method that exploits inductive biases of outdoor street scenes, including their geometric layout and object-level rigidity.
arXiv Detail & Related papers (2022-07-25T17:57:46Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- DeepI2P: Image-to-Point Cloud Registration via Deep Classification [71.3121124994105]
DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and Lidar.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
arXiv Detail & Related papers (2021-04-08T04:27:32Z)
- City-scale Scene Change Detection using Point Clouds [71.73273007900717]
We propose a method for detecting structural changes in a city using images captured from mounted cameras at two different times.
A direct comparison of the two point clouds for change detection is not ideal due to inaccurate geo-location information.
To circumvent this problem, we propose a deep learning-based non-rigid registration on the point clouds.
Experiments show that our method is able to detect scene changes effectively, even in the presence of viewpoint and illumination differences.
arXiv Detail & Related papers (2021-03-26T08:04:13Z)
- Robust Place Recognition using an Imaging Lidar [45.37172889338924]
We propose a methodology for robust, real-time place recognition using an imaging lidar.
Our method is truly invariant and can handle reverse revisiting and upside-down revisiting.
arXiv Detail & Related papers (2021-03-03T01:08:31Z)
- Automatic marker-free registration of tree point-cloud data based on rotating projection [23.08199833637939]
We propose an automatic coarse-to-fine method for the registration of point-cloud data from multiple scans of a single tree.
In coarse registration, point clouds produced by each scan are projected onto a spherical surface to generate a series of 2D images.
Corresponding feature-point pairs are then extracted from these 2D images.
In fine registration, point-cloud data slicing and fitting methods are used to extract corresponding central stem and branch centers.
arXiv Detail & Related papers (2020-01-30T06:53:59Z)