Improving Extrinsics between RADAR and LIDAR using Learning
- URL: http://arxiv.org/abs/2305.10594v1
- Date: Wed, 17 May 2023 22:04:29 GMT
- Title: Improving Extrinsics between RADAR and LIDAR using Learning
- Authors: Peng Jiang, Srikanth Saripalli
- Abstract summary: This paper presents a novel solution for 3D RADAR-LIDAR calibration in autonomous systems.
The method employs simple targets to generate data, including correspondence registration and a one-step optimization algorithm.
The proposed approach uses a deep learning framework such as PyTorch and can be optimized through gradient descent.
- Score: 18.211513930388417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LIDAR and RADAR are two commonly used sensors in autonomous driving systems.
The extrinsic calibration between the two is crucial for effective sensor
fusion. The challenge arises due to the low accuracy and sparse information in
RADAR measurements. This paper presents a novel solution for 3D RADAR-LIDAR
calibration in autonomous systems. The method employs simple targets to
generate data, including correspondence registration and a one-step
optimization algorithm. The optimization aims to minimize the reprojection
error while utilizing a small multi-layer perceptron (MLP) to perform
regression on the return energy of the sensor around the targets. The proposed
approach uses a deep learning framework such as PyTorch and can be optimized
through gradient descent. The experiment uses a 360-degree Ouster-128 LIDAR and
a 360-degree Navtech RADAR, providing raw measurements. The results validate
the effectiveness of the proposed method in achieving improved estimates of
extrinsic calibration parameters.
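As a rough illustration of the gradient-descent formulation described above, the following minimal PyTorch sketch estimates a RADAR-to-LIDAR rotation and translation by minimizing a point-to-point reprojection-style error over synthetic target correspondences. All names and data here are illustrative assumptions, not the authors' code, and the paper's MLP regression on return energy is omitted for brevity.

```python
import torch

def axis_angle_to_matrix(w):
    # Rodrigues' formula: map a 3-vector axis-angle to a 3x3 rotation matrix.
    theta = torch.sqrt((w * w).sum() + 1e-12)  # epsilon keeps the gradient finite at w = 0
    k = w / theta
    K = torch.zeros(3, 3, dtype=w.dtype)       # skew-symmetric cross-product matrix of k
    K[0, 1], K[0, 2] = -k[2], k[1]
    K[1, 0], K[1, 2] = k[2], -k[0]
    K[2, 0], K[2, 1] = -k[1], k[0]
    return (torch.eye(3, dtype=w.dtype)
            + torch.sin(theta) * K
            + (1 - torch.cos(theta)) * (K @ K))

def calibrate(radar_pts, lidar_pts, steps=400, lr=0.05):
    # One-step optimization: minimize mean squared distance between
    # transformed RADAR target centers and the corresponding LIDAR targets.
    w = torch.zeros(3, dtype=torch.float64, requires_grad=True)  # axis-angle rotation
    t = torch.zeros(3, dtype=torch.float64, requires_grad=True)  # translation
    opt = torch.optim.Adam([w, t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        R = axis_angle_to_matrix(w)
        loss = ((radar_pts @ R.T + t - lidar_pts) ** 2).sum(dim=1).mean()
        loss.backward()
        opt.step()
    return w.detach(), t.detach(), loss.item()

# Synthetic correspondences: LIDAR points are RADAR points under a known offset.
torch.manual_seed(0)
radar = torch.randn(20, 3, dtype=torch.float64)
true_t = torch.tensor([0.5, -0.2, 0.1], dtype=torch.float64)
lidar = radar + true_t
w_hat, t_hat, final_loss = calibrate(radar, lidar)
print(final_loss, t_hat)
```

Because the objective is differentiable in the transform parameters, any deep learning framework's optimizer can drive it; in this toy case the recovered translation converges to the known offset.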
Related papers
- YOCO: You Only Calibrate Once for Accurate Extrinsic Parameter in LiDAR-Camera Systems [0.5999777817331317]
In a multi-sensor fusion system composed of cameras and LiDAR, precise extrinsic calibration contributes to the system's long-term stability and accurate perception of the environment.
This paper proposes a novel fully automatic extrinsic calibration method for LiDAR-camera systems that circumvents the need for corresponding point registration.
arXiv Detail & Related papers (2024-07-25T13:44:49Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- From Chaos to Calibration: A Geometric Mutual Information Approach to Target-Free Camera LiDAR Extrinsic Calibration [4.378156825150505]
We propose a target-free extrinsic calibration algorithm that requires no ground-truth training data.
We demonstrate our proposed improvement using the KITTI and KITTI-360 fisheye datasets.
arXiv Detail & Related papers (2023-11-03T13:30:31Z)
- Fixation-based Self-calibration for Eye Tracking in VR Headsets [0.21561701531034413]
The proposed method is based on the assumption that the user's viewpoint can move freely.
Fixations are first detected from the time-series data of uncalibrated gaze directions.
The calibration parameters are optimized by minimizing the sum of dispersion metrics of the points of regard (PoRs).
arXiv Detail & Related papers (2023-11-01T09:34:15Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- End-To-End Optimization of LiDAR Beam Configuration for 3D Object Detection and Localization [87.56144220508587]
We take a new route to learn to optimize the LiDAR beam configuration for a given application.
We propose a reinforcement learning-based learning-to-optimize framework to automatically optimize the beam configuration.
Our method is especially useful when a low-resolution (low-cost) LiDAR is needed.
arXiv Detail & Related papers (2022-01-11T09:46:31Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performances on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)
- Accurate Alignment Inspection System for Low-resolution Automotive and Mobility LiDAR [125.41260574344933]
An accurate inspection system is proposed for estimating a LiDAR alignment error after sensor attachment on a mobility system such as a vehicle or robot.
The proposed method uses only a single target board at the fixed position to estimate the three orientations (roll, tilt, and yaw) and the horizontal position of the LiDAR attachment with sub-degree and millimeter level accuracy.
arXiv Detail & Related papers (2020-08-24T17:47:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.