CMRNext: Camera to LiDAR Matching in the Wild for Localization and Extrinsic Calibration
- URL: http://arxiv.org/abs/2402.00129v3
- Date: Tue, 2 Apr 2024 12:42:42 GMT
- Title: CMRNext: Camera to LiDAR Matching in the Wild for Localization and Extrinsic Calibration
- Authors: Daniele Cattaneo, Abhinav Valada
- Abstract summary: CMRNext is a novel approach for camera-LiDAR matching that is independent of sensor-specific parameters, generalizable, and can be used in the wild.
We extensively evaluate CMRNext on six different robotic platforms, including three publicly available datasets and three in-house robots.
- Score: 9.693729708337125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LiDARs are widely used for mapping and localization in dynamic environments. However, their high cost limits their widespread adoption. On the other hand, monocular localization in LiDAR maps using inexpensive cameras is a cost-effective alternative for large-scale deployment. Nevertheless, most existing approaches struggle to generalize to new sensor setups and environments, requiring retraining or fine-tuning. In this paper, we present CMRNext, a novel approach for camera-LiDAR matching that is independent of sensor-specific parameters, generalizable, and can be used in the wild for monocular localization in LiDAR maps and camera-LiDAR extrinsic calibration. CMRNext exploits recent advances in deep neural networks for matching cross-modal data and standard geometric techniques for robust pose estimation. We reformulate the point-pixel matching problem as an optical flow estimation problem and solve the Perspective-n-Point problem based on the resulting correspondences to find the relative pose between the camera and the LiDAR point cloud. We extensively evaluate CMRNext on six different robotic platforms, including three publicly available datasets and three in-house robots. Our experimental evaluations demonstrate that CMRNext outperforms existing approaches on both tasks and effectively generalizes to previously unseen environments and sensor setups in a zero-shot manner. We make the code and pre-trained models publicly available at http://cmrnext.cs.uni-freiburg.de.
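The second stage of the pipeline described in the abstract, solving Perspective-n-Point from point-pixel correspondences, can be illustrated with a minimal synthetic sketch. Note the assumptions: the correspondence step (CMRNext's optical-flow network) is replaced by noise-free ground-truth projections, and the PnP solver is a plain linear DLT rather than the robust geometric solver used in the paper. All function names, intrinsics, and poses below are illustrative.

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project 3D map points (N, 3) into the image given intrinsics K and pose (R, t)."""
    cam = pts3d @ R.T + t           # map frame -> camera frame
    uv = cam @ K.T                  # apply pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

def pnp_dlt(K, pts3d, pts2d):
    """Solve PnP with a linear DLT from noise-free 2D-3D matches (>= 6 points)."""
    ones = np.ones((len(pts2d), 1))
    xy = (np.linalg.inv(K) @ np.hstack([pts2d, ones]).T).T   # normalized camera coords
    A = []
    for (X, Y, Z), (x, y, _) in zip(pts3d, xy):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)                 # [R | t] up to an unknown scale
    if np.linalg.det(P[:, :3]) < 0:          # resolve the sign ambiguity
        P = -P
    U, S, Vt3 = np.linalg.svd(P[:, :3])
    R_est = U @ Vt3                          # nearest rotation matrix
    t_est = P[:, 3] / S.mean()               # undo the projective scale
    return R_est, t_est

# Synthetic check: a known camera-LiDAR pose is recovered from its own projections.
rng = np.random.default_rng(0)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
angle = 0.1
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.2, -0.1, 0.3])
pts3d = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 5.0])  # points in front of the camera
pts2d = project(K, R_true, t_true, pts3d)
R_est, t_est = pnp_dlt(K, pts3d, pts2d)
```

With perfect correspondences the linear solution is exact up to numerical precision; with real network predictions a RANSAC-wrapped solver would be used to reject outlier matches.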
Related papers
- LCPR: A Multi-Scale Attention-Based LiDAR-Camera Fusion Network for Place Recognition [11.206532393178385]
We present a novel neural network named LCPR for robust multimodal place recognition.
Our method can effectively utilize multi-view camera and LiDAR data to improve the place recognition performance.
arXiv Detail & Related papers (2023-11-06T15:39:48Z)
- INF: Implicit Neural Fusion for LiDAR and Camera [7.123895040455239]
Implicit neural fusion (INF) for LiDAR and camera is proposed in this paper.
INF first trains a neural density field of the target scene using LiDAR frames.
Then, a separate neural color field is trained using camera images and the trained neural density field.
arXiv Detail & Related papers (2023-08-28T08:51:20Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Photometric LiDAR and RGB-D Bundle Adjustment [3.3948742816399697]
This paper presents a novel Bundle Adjustment (BA) photometric strategy that accounts for both RGB-D and LiDAR in the same way.
In addition, we present the benefit of jointly using RGB-D and LiDAR within our unified method.
arXiv Detail & Related papers (2023-03-29T17:35:23Z)
- LCE-Calib: Automatic LiDAR-Frame/Event Camera Extrinsic Calibration With A Globally Optimal Solution [10.117923901732743]
The combination of LiDARs and cameras enables a mobile robot to perceive environments with multi-modal data.
Traditional frame cameras are sensitive to changing illumination conditions, motivating us to introduce novel event cameras.
This paper proposes an automatic checkerboard-based approach to calibrate extrinsics between a LiDAR and a frame/event camera.
arXiv Detail & Related papers (2023-03-17T08:07:56Z)
- From One to Many: Dynamic Cross Attention Networks for LiDAR and Camera Fusion [12.792769704561024]
Existing fusion methods tend to align each 3D point to only one projected image pixel based on calibration.
We propose a Dynamic Cross Attention (DCA) module with a novel one-to-many cross-modality mapping.
The whole fusion architecture named Dynamic Cross Attention Network (DCAN) exploits multi-level image features and adapts to multiple representations of point clouds.
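The one-to-many cross-modality mapping summarized above can be sketched with generic scaled dot-product cross attention, in which every 3D point queries all image pixels instead of only its single calibrated projection. This is a simplified stand-in for DCAN's learned DCA module, not its actual implementation; the function name, feature dimensions, and random inputs are illustrative.

```python
import numpy as np

def one_to_many_cross_attention(point_feats, img_feats):
    """Fuse per-point features by attending over ALL image pixels.
    Returns (N, d) fused features and the (N, M) attention weights."""
    d = point_feats.shape[1]
    scores = point_feats @ img_feats.T / np.sqrt(d)       # (N, M) point-pixel affinities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                     # softmax over the M pixels
    return w @ img_feats, w                               # weighted sum of pixel features

rng = np.random.default_rng(1)
point_feats = rng.normal(size=(5, 8))   # 5 LiDAR points, 8-dim features
img_feats = rng.normal(size=(50, 8))    # 50 image pixels, 8-dim features
fused, weights = one_to_many_cross_attention(point_feats, img_feats)
```

Each row of `weights` sums to one, so every point aggregates a soft, data-dependent neighborhood of pixels rather than one hard calibrated match.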
arXiv Detail & Related papers (2022-09-25T16:10:14Z)
- LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR Point Clouds [58.402752909624716]
Existing motion capture datasets are largely short-range and cannot yet meet the needs of long-range applications.
We propose LiDARHuman26M, a new human motion capture dataset captured by LiDAR at a much longer range to overcome this limitation.
Our dataset also includes the ground truth human motions acquired by the IMU system and the synchronous RGB images.
arXiv Detail & Related papers (2022-03-28T12:52:45Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
- Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z)
- 6D Camera Relocalization in Ambiguous Scenes via Continuous Multimodal Inference [67.70859730448473]
We present a multimodal camera relocalization framework that captures ambiguities and uncertainties.
We predict multiple camera pose hypotheses as well as the respective uncertainty for each prediction.
We introduce a new dataset specifically designed to foster camera localization research in ambiguous environments.
arXiv Detail & Related papers (2020-04-09T20:55:06Z)
- Deep Soft Procrustes for Markerless Volumetric Sensor Alignment [81.13055566952221]
In this work, we improve markerless data-driven correspondence estimation to achieve more robust multi-sensor spatial alignment.
We incorporate geometric constraints in an end-to-end manner into a typical segmentation based model and bridge the intermediate dense classification task with the targeted pose estimation one.
Our model is experimentally shown to achieve similar results with marker-based methods and outperform the markerless ones, while also being robust to the pose variations of the calibration structure.
arXiv Detail & Related papers (2020-03-23T10:51:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.