Efficient divide-and-conquer registration of UAV and ground LiDAR point
clouds through canopy shape context
- URL: http://arxiv.org/abs/2201.11296v1
- Date: Thu, 27 Jan 2022 03:29:56 GMT
- Title: Efficient divide-and-conquer registration of UAV and ground LiDAR point
clouds through canopy shape context
- Authors: Jie Shao, Wei Yao, Peng Wan, Lei Luo, Jiaxin Lyu, Wuming Zhang
- Abstract summary: We propose an automated and efficient method to register ULS and ground LiDAR point clouds in forests.
The proposed method uses coarse alignment and fine registration, where the coarse alignment of point clouds is divided into vertical and horizontal alignment.
Experimental results show that the ULS and ground LiDAR data in different plots are successfully registered, with horizontal alignment errors of less than 0.02 m.
- Score: 35.08788703582076
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Registration of unmanned aerial vehicle laser scanning (ULS) and ground light
detection and ranging (LiDAR) point clouds in forests is critical to create a
detailed representation of a forest structure and an accurate inversion of
forest parameters. However, forest occlusion poses challenges for marker-based
registration methods, and some marker-free automated registration methods have
low efficiency due to the process of object (e.g., tree, crown) segmentation.
Therefore, we use a divide-and-conquer strategy and propose an automated and
efficient method to register ULS and ground LiDAR point clouds in forests.
Registration involves coarse alignment and fine registration, where the coarse
alignment of point clouds is divided into vertical and horizontal alignment.
The vertical alignment is achieved by ground alignment, which derives the
transformation between the normal vector of the ground point cloud and the
horizontal plane, and the horizontal alignment is achieved by
canopy projection image matching. During image matching, vegetation points are
first separated from ground points by a ground filtering algorithm and then
projected onto the horizontal plane to obtain two binary images. To
match the two images, a matching strategy is used based on canopy shape context
features, which are described by a two-point congruent set and canopy overlap.
Finally, we implement coarse alignment of ULS and ground LiDAR datasets by
combining the results of ground alignment and image matching and finish fine
registration. Also, the effectiveness, accuracy, and efficiency of the proposed
method are demonstrated by field measurements of forest plots. Experimental
results show that the ULS and ground LiDAR data in different plots are
successfully registered, with horizontal alignment errors of less than 0.02 m
and an average runtime of less than 1 second.
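The two coarse-alignment steps described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the ground-plane normal has already been fitted (e.g., by plane fitting on ground-filtered points), rotates it onto the vertical axis via the Rodrigues formula, and rasterizes vegetation points into a binary canopy image. All function names and the grid cell size are illustrative.

```python
import numpy as np

def vertical_alignment_rotation(ground_normal):
    """Rotation matrix mapping the fitted ground-plane normal onto the
    vertical axis (0, 0, 1), i.e., levelling the point cloud."""
    n = np.asarray(ground_normal, dtype=float)
    n /= np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)                  # rotation axis (unnormalised)
    s, c = np.linalg.norm(v), np.dot(n, z)
    if s < 1e-12:                       # normal already (anti-)parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues formula: R = I + [v]x + [v]x^2 * (1 - c) / s^2
    return np.eye(3) + vx + vx @ vx * ((1.0 - c) / s**2)

def canopy_binary_image(points_xy, cell=0.5):
    """Project vegetation points onto the horizontal plane and rasterise
    them into a binary occupancy image (1 = canopy present in that cell)."""
    xy = np.asarray(points_xy, dtype=float)
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell).astype(int)
    img = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
    img[idx[:, 0], idx[:, 1]] = 1
    return img
```

The resulting binary images from the ULS and ground scans would then be matched (in the paper, via canopy shape context features and a two-point congruent set) to recover the horizontal translation and rotation.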
Related papers
- Automatic marker-free registration based on similar tetrahedras for single-tree point clouds [14.043846409201112]
This paper proposes a marker-free automatic registration method for single-tree point clouds based on similar tetrahedras.
The proposed method significantly outperforms both ICP and NDT in registration accuracy, achieving speeds up to 593 times and 113 times faster than ICP and NDT, respectively.
arXiv Detail & Related papers (2024-11-20T06:34:47Z) - View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z) - Boosting Few-shot Fine-grained Recognition with Background Suppression
and Foreground Alignment [53.401889855278704]
Few-shot fine-grained recognition (FS-FGR) aims to recognize novel fine-grained categories with the help of limited available samples.
We propose a two-stage background suppression and foreground alignment framework, which is composed of a background activation suppression (BAS) module, a foreground object alignment (FOA) module, and a local to local (L2L) similarity metric.
Experiments conducted on multiple popular fine-grained benchmarks demonstrate that our method outperforms the existing state-of-the-art by a large margin.
arXiv Detail & Related papers (2022-10-04T07:54:40Z) - 2D LiDAR and Camera Fusion Using Motion Cues for Indoor Layout
Estimation [2.6905021039717987]
A ground robot explores an indoor space with a single floor and vertical walls, and collects a sequence of intensity images and 2D LiDAR datasets.
The alignment of sensor outputs and image segmentation are computed jointly by aligning LiDAR points.
The ambiguity in images for ground-wall boundary extraction is removed with the assistance of LiDAR observations.
arXiv Detail & Related papers (2022-04-24T06:26:02Z) - Precise Aerial Image Matching based on Deep Homography Estimation [21.948001630564363]
We propose a deep homography alignment network to precisely match two aerial images.
The proposed network makes it possible to train the matching network with a higher degree of freedom.
We introduce a method that can effectively learn the difficult-to-learn homography estimation network.
arXiv Detail & Related papers (2021-07-19T11:52:52Z) - Where am I looking at? Joint Location and Orientation Estimation by
Cross-View Matching [95.64702426906466]
Cross-view geo-localization matches a ground-level query image against a large-scale database of geo-tagged aerial images.
Knowing orientation between ground and aerial images can significantly reduce matching ambiguity between these two views.
We design a Dynamic Similarity Matching network to estimate cross-view orientation alignment during localization.
arXiv Detail & Related papers (2020-05-08T05:21:16Z) - Plan2Vec: Unsupervised Representation Learning by Latent Plans [106.37274654231659]
We introduce plan2vec, an unsupervised representation learning approach that is inspired by reinforcement learning.
Plan2vec constructs a weighted graph on an image dataset using near-neighbor distances, and then extrapolates this local metric to a global embedding by distilling a path integral over planned paths.
We demonstrate the effectiveness of plan2vec on one simulated and two challenging real-world image datasets.
arXiv Detail & Related papers (2020-05-07T17:52:23Z) - Automatic marker-free registration of tree point-cloud data based on
rotating projection [23.08199833637939]
We propose an automatic coarse-to-fine method for the registration of point-cloud data from multiple scans of a single tree.
In coarse registration, point clouds produced by each scan are projected onto a spherical surface to generate a series of 2D images.
Corresponding feature-point pairs are then extracted from these 2D images.
In fine registration, point-cloud data slicing and fitting methods are used to extract corresponding central stem and branch centers.
arXiv Detail & Related papers (2020-01-30T06:53:59Z) - From Planes to Corners: Multi-Purpose Primitive Detection in Unorganized
3D Point Clouds [59.98665358527686]
We propose a new method for segmentation-free joint estimation of orthogonal planes.
Such unified scene exploration allows for multitudes of applications such as semantic plane detection or local and global scan alignment.
Our experiments demonstrate the validity of our approach in numerous scenarios from wall detection to 6D tracking.
arXiv Detail & Related papers (2020-01-21T06:51:47Z) - TCM-ICP: Transformation Compatibility Measure for Registering Multiple
LIDAR Scans [4.5412347600435465]
We present an algorithm for registering multiple, overlapping LiDAR scans.
In this work, we introduce a geometric metric called Transformation Compatibility Measure (TCM) which aids in choosing the most similar point clouds for registration.
We evaluate the proposed algorithm on four different real-world scenes, and experimental results show that its registration performance is comparable or superior to traditionally used registration methods.
arXiv Detail & Related papers (2020-01-04T21:05:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.