RaLL: End-to-end Radar Localization on Lidar Map Using Differentiable Measurement Model
- URL: http://arxiv.org/abs/2009.07061v3
- Date: Sat, 6 Mar 2021 03:17:49 GMT
- Title: RaLL: End-to-end Radar Localization on Lidar Map Using Differentiable Measurement Model
- Authors: Huan Yin, Runjian Chen, Yue Wang and Rong Xiong
- Abstract summary: We propose an end-to-end deep learning framework for Radar Localization on Lidar Map (RaLL).
RaLL exploits the mature lidar mapping technique, thus reducing the cost of radar mapping.
Our proposed system achieves superior performance over $90km$ of driving, even in a generalization scenario where the model is trained in the UK and tested in South Korea.
- Score: 14.155337185792279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compared to the onboard camera and laser scanner, the radar sensor provides lighting- and weather-invariant sensing, which makes it naturally suitable for long-term localization under adverse conditions. However, radar data is sparse and noisy, which makes radar mapping challenging. On the other hand, the most popular maps currently available are built with lidar. In this paper, we propose an end-to-end deep learning framework for Radar Localization on Lidar Map (RaLL) to bridge this gap, which not only achieves robust radar localization but also exploits the mature lidar mapping technique, thus reducing the cost of radar mapping. We first embed both sensor modalities into a common feature space with a neural network. Then multiple offsets are applied to the map modality for exhaustive similarity evaluation against the current radar modality, yielding a regression of the current pose. Finally, we apply this differentiable measurement model to a Kalman Filter (KF) to learn the whole sequential localization process in an end-to-end manner. The whole learning system is differentiable, with the network-based measurement model at the front-end and the KF at the back-end. To validate its feasibility and effectiveness, we employ multi-session, multi-scene datasets collected from the real world, and the results demonstrate that our proposed system achieves superior performance over $90km$ of driving, even in a generalization scenario where the model is trained in the UK and tested in South Korea. We also release the source code publicly.
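To make the described pipeline concrete, here is a minimal sketch of the differentiable measurement model feeding a Kalman Filter update, assuming PyTorch. The encoder layers, the candidate-offset grid, and all function names below are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of RaLL's differentiable measurement model inside a KF
# update (illustrative assumptions throughout; see the released code for
# the authors' actual implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder():
    # CNN that embeds a BEV grid (radar scan or lidar map patch) into the
    # shared feature space described in the abstract.
    return nn.Sequential(
        nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
        nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
    )

radar_enc, map_enc = make_encoder(), make_encoder()

# Hypothetical candidate offsets (dx, dy) in normalized image coordinates;
# the paper sweeps translation and rotation around the KF-predicted pose.
offsets = torch.tensor([[dx, dy] for dx in (-0.1, 0.0, 0.1)
                                 for dy in (-0.1, 0.0, 0.1)])

def measure(radar_bev, map_bev):
    """Differentiable pose-offset regression between radar scan and map."""
    f_radar = radar_enc(radar_bev)
    scores = []
    for dx, dy in offsets.tolist():
        # Shift the map patch by the candidate offset and compare features.
        theta = torch.tensor([[[1.0, 0.0, dx], [0.0, 1.0, dy]]])
        grid = F.affine_grid(theta, list(map_bev.shape), align_corners=False)
        shifted = F.grid_sample(map_bev, grid, align_corners=False)
        scores.append((f_radar * map_enc(shifted)).mean())
    # Soft-argmax over candidates keeps the regression differentiable.
    w = F.softmax(torch.stack(scores), dim=0)
    return (w.unsqueeze(1) * offsets).sum(dim=0)        # expected (dx, dy)

def kf_update(x_pred, P_pred, z, R):
    # Standard KF update on a toy 2-D state; gradients flow through K and z
    # back into the encoders, which enables end-to-end training.
    H = torch.eye(2)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ torch.inverse(S)
    x = x_pred + K @ (z - H @ x_pred)
    P = (torch.eye(2) - K @ H) @ P_pred
    return x, P
```

Because the filter update is plain differentiable tensor algebra, a loss on the filtered pose backpropagates through kf_update and measure into both encoders, which is exactly the end-to-end property the abstract emphasizes.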
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- RaLF: Flow-based Global and Metric Radar Localization in LiDAR Maps [8.625083692154414]
We propose RaLF, a novel deep neural network-based approach for localizing radar scans in a LiDAR map of the environment.
RaLF is composed of radar and LiDAR feature encoders, a place recognition head that generates global descriptors, and a metric localization head that predicts the 3-DoF transformation between the radar scan and the map.
We extensively evaluate our approach on multiple real-world driving datasets and show that RaLF achieves state-of-the-art performance for both place recognition and metric localization.
arXiv Detail & Related papers (2023-09-18T15:37:01Z)
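As a rough sketch of the architecture the RaLF entry above describes (modality-specific encoders, a place-recognition head, and a 3-DoF metric head), assuming PyTorch; layer sizes and names are assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

class RaLFSketch(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        # Separate encoders for the radar scan and the LiDAR map patch.
        self.radar_enc = nn.Sequential(nn.Conv2d(1, feat, 3, 2, 1), nn.ReLU(),
                                       nn.Conv2d(feat, feat, 3, 2, 1), nn.ReLU())
        self.lidar_enc = nn.Sequential(nn.Conv2d(1, feat, 3, 2, 1), nn.ReLU(),
                                       nn.Conv2d(feat, feat, 3, 2, 1), nn.ReLU())
        # Place-recognition head: one global descriptor per scan.
        self.place_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(feat, 128))
        # Metric head: 3-DoF transform (x, y, yaw) between scan and map.
        self.metric_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                         nn.Linear(2 * feat, 3))

    def forward(self, radar_bev, lidar_bev):
        fr, fl = self.radar_enc(radar_bev), self.lidar_enc(lidar_bev)
        descriptor = self.place_head(fr)                 # for global retrieval
        pose = self.metric_head(torch.cat([fr, fl], 1))  # radar-to-map offset
        return descriptor, pose
```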
- Pointing the Way: Refining Radar-Lidar Localization Using Learned ICP Weights [10.613476233222347]
We build on ICP-based radar-lidar localization by including a learned preprocessing step that weights radar points based on high-level scan information.
To train the weight-generating network, we present a novel, stand-alone, open-source differentiable ICP library.
arXiv Detail & Related papers (2023-09-15T19:37:58Z)
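To illustrate how learned per-point weights enter ICP as described in the entry above, here is a single weighted point-to-point alignment step (2-D, NumPy); this is a simplification, not the paper's stand-alone differentiable library:

```python
import numpy as np

def weighted_icp_step(src, dst, w):
    """src, dst: (N, 2) matched points; w: (N,) learned weights."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)            # weighted centroids
    mu_d = (w[:, None] * dst).sum(0)
    H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                              # optimal rotation (Kabsch)
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In the paper's setting, a network predicts the weights from high-level scan information, and their differentiable ICP library lets training gradients flow through steps like this one.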
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
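A hedged sketch of the BEV-query mechanism named in the EchoFusion entry above: learned BEV queries cross-attend to features taken from the raw radar spectrum. The grid size, feature dimension, and names are assumptions:

```python
import torch
import torch.nn as nn

BEV, D = 32, 64                                        # BEV side length, feature dim
bev_queries = nn.Parameter(torch.randn(BEV * BEV, D))  # one learned query per cell
cross_attn = nn.MultiheadAttention(D, num_heads=4, batch_first=True)

def fuse(spectrum_feats):
    """spectrum_feats: (B, n_bins, D) features from the raw radar spectrum."""
    q = bev_queries.unsqueeze(0).expand(spectrum_feats.shape[0], -1, -1)
    bev, _ = cross_attn(q, spectrum_feats, spectrum_feats)
    # BEV-organized features, ready to fuse with lidar/camera branches.
    return bev.reshape(-1, BEV, BEV, D)
```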
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on the Oxford Radar RobotCar, ApolloSouthBay, and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Energy-Based Models for Cross-Modal Localization using Convolutional Transformers [52.27061799824835]
We present a novel framework for localizing a ground vehicle mounted with a range sensor against satellite imagery in the absence of GPS.
We propose a method using convolutional transformers that performs accurate metric-level localization in a cross-modal manner.
We train our model end-to-end and demonstrate that our approach achieves higher accuracy than the state-of-the-art on KITTI, Pandaset, and a custom dataset.
arXiv Detail & Related papers (2023-06-06T21:27:08Z)
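A toy rendition of the energy-based, cross-modal matching described in the entry above: an energy network scores how well the range-sensor BEV aligns with the satellite patch at each candidate pose, and the lowest energy wins. All shapes and names are assumptions:

```python
import torch
import torch.nn as nn

# Tiny energy network over stacked (sensor, satellite) image pairs.
energy_net = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)

def localize(sensor_bev, satellite_patches):
    """sensor_bev: (1, 1, H, W); satellite_patches: (K, 1, H, W), one per pose."""
    pairs = torch.cat([sensor_bev.expand(len(satellite_patches), -1, -1, -1),
                       satellite_patches], dim=1)        # (K, 2, H, W)
    energies = energy_net(pairs).squeeze(1)              # (K,) energy per candidate
    return energies.argmin()                             # index of the best pose
```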
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method that performs convolutions on radar detection point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
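The distance-dependent clustering mentioned in the entry above can be pictured as a grouping radius that grows with range; a greedy NumPy sketch (threshold values are made up for illustration):

```python
import numpy as np

def cluster(points, base_eps=0.5, eps_per_m=0.02):
    """points: (N, 2) radar detections. Greedy range-dependent clustering."""
    labels, next_label = -np.ones(len(points), dtype=int), 0
    for i in range(len(points)):
        if labels[i] >= 0:
            continue
        labels[i] = next_label
        # The grouping radius grows with the detection's range from the sensor.
        eps = base_eps + eps_per_m * np.linalg.norm(points[i])
        d = np.linalg.norm(points - points[i], axis=1)
        labels[(d < eps) & (labels < 0)] = next_label
        next_label += 1
    return labels
```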
- Deep Radar Inverse Sensor Models for Dynamic Occupancy Grid Maps [0.0]
We propose a deep learning-based Inverse Sensor Model (ISM) to learn the mapping from sparse radar detections to polar measurement grids.
Our approach is the first one to learn a single-frame measurement grid in the polar scheme from radars with a limited Field Of View.
This enables us to flexibly use one or more radar sensors without network retraining and without requiring 360° sensor coverage.
arXiv Detail & Related papers (2023-05-21T09:09:23Z)
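A minimal sketch of the polar inverse-sensor-model idea from the entry above: rasterize sparse detections into a polar grid, then let a small CNN predict per-cell occupancy. The grid resolution and the network are illustrative assumptions:

```python
import numpy as np
import torch
import torch.nn as nn

N_R, N_A = 64, 128                        # range bins, azimuth bins

def to_polar_grid(ranges, azimuths, r_max=50.0):
    # Mark each sparse detection in a (range, azimuth) occupancy grid.
    grid = np.zeros((N_R, N_A), dtype=np.float32)
    r_idx = np.clip((ranges / r_max * N_R).astype(int), 0, N_R - 1)
    a_idx = ((azimuths + np.pi) / (2 * np.pi) * N_A).astype(int) % N_A
    grid[r_idx, a_idx] = 1.0
    return grid

# Single-frame ISM: sparse polar grid in, per-cell occupancy probability out.
ism = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

occ = ism(torch.from_numpy(to_polar_grid(np.array([10.0, 12.5]),
                                         np.array([0.3, -1.2])))
          .unsqueeze(0).unsqueeze(0))     # (1, 1, N_R, N_A) occupancy map
```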
- Radar-based Automotive Localization using Landmarks in a Multimodal Sensor Graph-based Approach [0.0]
In this paper, we address the problem of localization with automotive-grade radars.
The system uses landmarks and odometry information as an abstraction layer.
A single, semantic landmark map is used and maintained for all sensors.
arXiv Detail & Related papers (2021-04-29T07:35:20Z)
- RadarLoc: Learning to Relocalize in FMCW Radar [36.68888832365474]
We propose a novel end-to-end neural network with self-attention, termed RadarLoc, which is able to estimate 6-DoF global poses directly.
We validate our approach on the recently released challenging outdoor dataset Oxford Radar RobotCar.
arXiv Detail & Related papers (2021-03-22T03:22:37Z)
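A hedged sketch of direct global pose regression with self-attention, as the RadarLoc entry above describes; the backbone, token pooling, and output parameterization here are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        # Convolutional stem turns the radar scan into a grid of tokens.
        self.backbone = nn.Sequential(nn.Conv2d(1, d, 7, 4, 3), nn.ReLU())
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.head = nn.Linear(d, 6)                   # (x, y, z, roll, pitch, yaw)

    def forward(self, scan):                          # scan: (B, 1, H, W)
        f = self.backbone(scan)                       # (B, d, H', W')
        tokens = f.flatten(2).transpose(1, 2)         # (B, H'*W', d)
        tokens, _ = self.attn(tokens, tokens, tokens) # self-attention over tokens
        return self.head(tokens.mean(dim=1))          # global 6-DoF pose estimate
```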
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high-definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)