Fast and Robust Registration of Aerial Images and LiDAR data Based on
Structural Features and 3D Phase Correlation
- URL: http://arxiv.org/abs/2004.09811v1
- Date: Tue, 21 Apr 2020 08:19:56 GMT
- Title: Fast and Robust Registration of Aerial Images and LiDAR data Based on
Structural Features and 3D Phase Correlation
- Authors: Bai Zhu, Yuanxin Ye, Chao Yang, Liang Zhou, Huiyu Liu, Yungang Cao
- Abstract summary: This paper proposes an automatic registration method based on structural features and three-dimensional (3D) phase correlation.
Experiments with two datasets of aerial images and LiDAR data show that the proposed method is much faster and more robust than state-of-the-art methods.
- Score: 6.3812295314207335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Co-registration of aerial imagery and Light Detection and Ranging (LiDAR)
data is quite challenging because the different imaging mechanisms cause
significant geometric and radiometric distortions between such data. To tackle
the problem, this paper proposes an automatic registration method based on
structural features and three-dimensional (3D) phase correlation. In the proposed
method, the LiDAR point cloud data is first transformed into an intensity map,
which is used as the reference image. Then, we employ the FAST operator to
extract uniformly distributed interest points in the aerial image by a
partition strategy and perform a local geometric correction by using the
collinearity equation to eliminate scale and rotation difference between
images. Subsequently, a robust structural feature descriptor is built based on
dense gradient features, and the 3D phase correlation is used to detect control
points (CPs) between aerial images and LiDAR data in the frequency domain,
where the image matching is accelerated by the 3D Fast Fourier Transform (FFT).
Finally, the obtained CPs are employed to correct the exterior orientation
elements, which are used to achieve co-registration of aerial images and LiDAR
data. Experiments with two datasets of aerial images and LiDAR data show that
the proposed method is much faster and more robust than state-of-the-art
methods.
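The frequency-domain matching step above can be illustrated in two dimensions. The sketch below is not the paper's implementation (which correlates stacks of dense structural-feature maps with a 3D FFT); it only demonstrates why FFT-based phase correlation makes the control-point search fast: a pure translation collapses to a single sharp peak in the correlation surface.

```python
import numpy as np

def phase_correlation(ref, mov):
    """Recover the circular shift of `mov` relative to `ref`.

    2D sketch of the frequency-domain matching idea; the paper's method
    stacks dense structural-feature channels and applies a 3D FFT, but
    the core mechanism is the same.
    """
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(mov)
    # Normalised cross-power spectrum: magnitudes cancel, only the phase
    # difference (i.e. the translation) remains.
    cross = F_mov * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    # The correlation surface peaks at the translation offset.
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert wrapped indices to signed shifts.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

For a circularly shifted copy of an image, the peak location recovers the shift exactly; the FFT makes the search over all offsets O(N log N) rather than O(N²), which is the source of the speed claim.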
Related papers
- Gaussian Representation for Deformable Image Registration [12.226244219255197]
We introduce a novel DIR approach employing parametric 3D Gaussian control points.
It provides an explicit and flexible representation for spatial fields between 3D medical images.
We validated our approach on the 4D-CT lung DIR-Lab and cardiac ACDC datasets, achieving an average target registration error (TRE) of 1.06 mm within a much-improved processing time of 2.43 seconds.
arXiv Detail & Related papers (2024-06-05T15:44:54Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- Misalignment-Robust Frequency Distribution Loss for Image Transformation [51.0462138717502]
This paper aims to address a common challenge in deep learning-based image transformation methods, such as image enhancement and super-resolution.
We introduce a novel and simple Frequency Distribution Loss (FDL) for computing distribution distance within the frequency domain.
Our method is empirically proven effective as a training constraint due to the thoughtful utilization of global information in the frequency domain.
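As a rough intuition for why a frequency-domain distribution distance tolerates misalignment, the toy function below compares sorted FFT magnitude spectra. This is not the published FDL (whose exact components the summary does not specify), merely an illustrative stand-in that already shows the key property: invariance to spatial translation.

```python
import numpy as np

def freq_distribution_distance(pred, target):
    """Toy frequency-domain distribution distance between two images.

    Not the paper's FDL: we simply compare sorted FFT magnitude spectra
    as a crude 1D distribution distance. Because magnitude spectra are
    invariant to circular translation, the distance ignores misalignment.
    """
    mag_p = np.sort(np.abs(np.fft.fft2(pred)).ravel())
    mag_t = np.sort(np.abs(np.fft.fft2(target)).ravel())
    return float(np.mean(np.abs(mag_p - mag_t)))
```

A shifted copy of an image scores (near) zero under this distance even though its pixel-wise error is large, which is the behaviour a misalignment-robust loss needs.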
arXiv Detail & Related papers (2024-02-28T09:27:41Z)
- DiffusionPCR: Diffusion Models for Robust Multi-Step Point Cloud Registration [73.37538551605712]
Point Cloud Registration (PCR) estimates the relative rigid transformation between two point clouds.
We propose formulating PCR as a denoising diffusion probabilistic process, mapping noisy transformations to the ground truth.
Our experiments showcase the effectiveness of our DiffusionPCR, yielding state-of-the-art registration recall rates (95.3%/81.6%) on 3D and 3DLoMatch.
arXiv Detail & Related papers (2023-12-05T18:59:41Z)
- Learning transformer-based heterogeneously salient graph representation for multimodal remote sensing image classification [42.15709954199397]
A transformer-based heterogeneously salient graph representation (THSGR) approach is proposed in this paper.
First, a multimodal heterogeneous graph encoder is presented to encode distinctively non-Euclidean structural features from heterogeneous data.
A self-attention-free multi-convolutional modulator is designed for effective and efficient long-term dependency modeling.
arXiv Detail & Related papers (2023-11-17T04:06:20Z)
- From One to Many: Dynamic Cross Attention Networks for LiDAR and Camera Fusion [12.792769704561024]
Existing fusion methods tend to align each 3D point to only one projected image pixel based on calibration.
We propose a Dynamic Cross Attention (DCA) module with a novel one-to-many cross-modality mapping.
The whole fusion architecture named Dynamic Cross Attention Network (DCAN) exploits multi-level image features and adapts to multiple representations of point clouds.
arXiv Detail & Related papers (2022-09-25T16:10:14Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers [49.689566246504356]
We propose TransFusion, a robust solution to LiDAR-camera fusion with a soft-association mechanism to handle inferior image conditions.
TransFusion achieves state-of-the-art performance on large-scale datasets.
We extend the proposed method to the 3D tracking task and achieve the 1st place in the leaderboard of nuScenes tracking.
arXiv Detail & Related papers (2022-03-22T07:15:13Z)
- Robust Registration of Multimodal Remote Sensing Images Based on Structural Similarity [11.512088109547294]
This paper proposes a novel feature descriptor named the Histogram of Orientated Phase Congruency (HOPC).
HOPC is based on the structural properties of images.
A similarity metric named HOPCncc is defined, which uses the normalized correlation coefficient (NCC) of the HOPC descriptors for multimodal registration.
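The NCC at the heart of HOPCncc is the standard normalised correlation coefficient. A minimal sketch on generic descriptor vectors (the HOPC descriptor construction itself is not reproduced here):

```python
import numpy as np

def ncc(a, b):
    """Normalised correlation coefficient of two descriptor vectors.

    HOPCncc applies this to HOPC descriptors; the metric itself is the
    zero-mean, unit-norm dot product, giving values in [-1, 1] that are
    invariant to linear (gain/offset) intensity changes.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

The invariance to gain and offset is what makes NCC-style metrics usable across modalities with very different radiometry, the setting both that paper and the registration method above target.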
arXiv Detail & Related papers (2021-03-31T07:51:21Z)
- Correlation Plenoptic Imaging between Arbitrary Planes [52.77024349608834]
We show that the protocol enables changing the focused planes in post-processing and achieves an unprecedented combination of image resolution and depth of field.
Results lead the way towards the development of compact designs for correlation plenoptic imaging devices based on chaotic light, as well as high-SNR plenoptic imaging devices based on entangled photon illumination.
arXiv Detail & Related papers (2020-07-23T14:26:14Z)
- Leveraging Photogrammetric Mesh Models for Aerial-Ground Feature Point Matching Toward Integrated 3D Reconstruction [19.551088857830944]
Integration of aerial and ground images has proved to be an efficient approach to enhance surface reconstruction in urban environments.
Previous studies based on geometry-aware image rectification have alleviated the aerial-ground matching problem.
We propose a novel approach: leveraging photogrammetric mesh models for aerial-ground image matching.
arXiv Detail & Related papers (2020-02-21T01:47:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.