A Simple and Efficient Registration of 3D Point Cloud and Image Data for
Indoor Mobile Mapping System
- URL: http://arxiv.org/abs/2010.14261v1
- Date: Tue, 27 Oct 2020 13:01:54 GMT
- Title: A Simple and Efficient Registration of 3D Point Cloud and Image Data for
Indoor Mobile Mapping System
- Authors: Hao Ma, Jingbin Liu, Keke Liu, Hongyu Qiu, Dong Xu, Zemin Wang,
Xiaodong Gong, Sheng Yang (State Key Laboratory of Information Engineering in
Surveying, Mapping and Remote Sensing, Wuhan University)
- Abstract summary: Registration of 3D LiDAR point clouds with optical images is critical in the combination of multi-source data.
Geometric misalignment originally exists in the pose data between LiDAR point clouds and optical images.
We develop a simple but efficient registration method to improve the accuracy of the initial pose.
- Score: 18.644879251473647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Registration of 3D LiDAR point clouds with optical images is critical in the
combination of multi-source data. Geometric misalignment originally exists in
the pose data between LiDAR point clouds and optical images. To improve the
accuracy of the initial pose and the applicability of the integration of 3D
points and image data, we develop a simple but efficient registration method.
We first extract point features from LiDAR point clouds and images: point
features are extracted from single-frame LiDAR data, and point features from
images using the classical Canny method. A cost map is subsequently built based
on Canny image edge detection. The optimization direction is guided by the cost
map, where low cost represents the desired direction, and a loss function is
also designed to improve the robustness of the proposed method. Experiments
show promising results.
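The pipeline sketched in the abstract (detect image edges, build a cost map in which low cost marks proximity to an edge, then let that cost guide pose optimization) can be illustrated as below. This is a minimal sketch, not the authors' implementation: it uses a gradient-magnitude threshold as a stand-in for the Canny detector and a brute-force distance transform for the cost map; all function names are hypothetical.

```python
import numpy as np

def edge_map(img, thresh=0.3):
    """Crude edge detector: gradient magnitude above a fraction of its max.
    (The paper uses the classical Canny method; this is a simple stand-in.)"""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

def cost_map(edges):
    """Distance to the nearest edge pixel: low cost near edges, so an
    optimizer that projects LiDAR features into the image and sums the
    cost at their pixels is pulled toward edge alignment."""
    ys, xs = np.nonzero(edges)
    pts = np.stack([ys, xs], axis=1)                      # (N, 2) edge pixels
    h, w = edges.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                indexing="ij"), axis=-1)  # (H, W, 2)
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
    return d.min(axis=2)                                  # (H, W) cost map
```

For a real image one would replace both helpers with `cv2.Canny` and `cv2.distanceTransform`; the brute-force version above is only practical for small arrays but makes the cost-map idea explicit.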
Related papers
- TexLiDAR: Automated Text Understanding for Panoramic LiDAR Data [0.6144680854063939]
Efforts to connect LiDAR data with text, such as LidarCLIP, have primarily focused on embedding 3D point clouds into CLIP text-image space.
We propose an alternative approach to connect LiDAR data with text by leveraging 2D imagery generated by the OS1 sensor instead of 3D point clouds.
arXiv Detail & Related papers (2025-02-05T19:41:06Z)
- Mapping and Localization Using LiDAR Fiducial Markers [0.8702432681310401]
This dissertation proposes a novel framework for mapping and localization using LiDAR fiducial markers.
An Intensity Image-based LiDAR Fiducial Marker (IFM) system is introduced, using thin, letter-sized markers compatible with visual fiducial markers.
New LFM-based mapping and localization method registers unordered, low-overlap point clouds.
arXiv Detail & Related papers (2025-02-05T17:33:59Z)
- PointRegGPT: Boosting 3D Point Cloud Registration using Generative Point-Cloud Pairs for Training [90.06520673092702]
We present PointRegGPT, boosting 3D point cloud registration using generative point-cloud pairs for training.
To our knowledge, this is the first generative approach that explores realistic data generation for indoor point cloud registration.
arXiv Detail & Related papers (2024-07-19T06:29:57Z)
- Self-supervised Learning of LiDAR 3D Point Clouds via 2D-3D Neural Calibration [107.61458720202984]
This paper introduces a novel self-supervised learning framework for enhancing 3D perception in autonomous driving scenes.
We propose the learnable transformation alignment to bridge the domain gap between image and point cloud data.
We establish dense 2D-3D correspondences to estimate the rigid pose.
arXiv Detail & Related papers (2024-01-23T02:41:06Z)
- GraphAlign: Enhancing Accurate Feature Alignment by Graph matching for Multi-Modal 3D Object Detection [7.743525134435137]
LiDAR and cameras are complementary sensors for 3D object detection in autonomous driving.
We present GraphAlign, a more accurate feature alignment strategy for 3D object detection by graph matching.
arXiv Detail & Related papers (2023-10-12T12:06:31Z)
- Quadric Representations for LiDAR Odometry, Mapping and Localization [93.24140840537912]
Current LiDAR odometry, mapping and localization methods leverage point-wise representations of 3D scenes.
We propose a novel method of describing scenes using quadric surfaces, which are far more compact representations of 3D objects.
Our method maintains low latency and memory utility while achieving competitive, and even superior, accuracy.
arXiv Detail & Related papers (2023-04-27T13:52:01Z)
- (LC)$^2$: LiDAR-Camera Loop Constraints For Cross-Modal Place Recognition [0.9449650062296824]
We propose a novel cross-matching method, called (LC)$^2$, for achieving LiDAR localization without a prior point cloud map.
The network is trained to extract localization descriptors from disparity and range images.
We demonstrate that LiDAR-based navigation systems could be optimized from image databases and vice versa.
arXiv Detail & Related papers (2023-04-17T23:20:16Z)
- Real-Time Simultaneous Localization and Mapping with LiDAR intensity [9.374695605941627]
We propose a novel real-time LiDAR intensity image-based simultaneous localization and mapping method.
Our method can run in real time with high accuracy and works well with illumination changes, low-texture, and unstructured environments.
arXiv Detail & Related papers (2023-01-23T03:59:48Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- MonoDistill: Learning Spatial Features for Monocular 3D Object Detection [80.74622486604886]
We propose a simple and effective scheme to introduce the spatial information from LiDAR signals to the monocular 3D detectors.
We use the resulting data to train a 3D detector with the same architecture as the baseline model.
Experimental results show that the proposed method can significantly boost the performance of the baseline model.
arXiv Detail & Related papers (2022-01-26T09:21:41Z)
- Revisiting Point Cloud Simplification: A Learnable Feature Preserving Approach [57.67932970472768]
Mesh and Point Cloud simplification methods aim to reduce the complexity of 3D models while retaining visual quality and relevant salient features.
We propose a fast point cloud simplification method by learning to sample salient points.
The proposed method relies on a graph neural network architecture trained to select an arbitrary, user-defined, number of points from the input space and to re-arrange their positions so as to minimize the visual perception error.
arXiv Detail & Related papers (2021-09-30T10:23:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.