Radar-based Automotive Localization using Landmarks in a Multimodal
Sensor Graph-based Approach
- URL: http://arxiv.org/abs/2104.14156v1
- Date: Thu, 29 Apr 2021 07:35:20 GMT
- Title: Radar-based Automotive Localization using Landmarks in a Multimodal
Sensor Graph-based Approach
- Authors: Stefan Jürgens, Niklas Koch and Marc-Michael Meinecke
- Abstract summary: In this paper, we address the problem of localization with automotive-grade radars.
The system uses landmarks and odometry information as an abstraction layer.
A single, semantic landmark map is used and maintained for all sensors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Highly automated driving functions currently often rely on a priori knowledge
from maps for planning and prediction in complex scenarios like cities. This
makes map-relative localization an essential skill. In this paper, we address
the problem of localization with automotive-grade radars, using a real-time
graph-based SLAM approach. The system uses landmarks and odometry information
as an abstraction layer. This way, besides radars, all kinds of sensor
modalities including cameras and lidars can contribute. A single, semantic
landmark map is used and maintained for all sensors. We implemented our
approach using C++ and thoroughly tested it on data obtained with our test
vehicles, comprising cars and trucks. Test scenarios include inner cities and
industrial areas like container terminals. The experiments presented in this
paper suggest that the approach is able to provide a precise and stable pose in
structured environments, using radar data alone. The fusion of additional
sensor information from cameras or lidars further boosts performance, providing
reliable semantic information needed for automated mapping.
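The abstract describes a graph-based SLAM backend in which poses and landmarks are nodes, while odometry and landmark observations form the edges. As a hedged illustration only (this is not the paper's implementation; the 1-D setup and all measurement values are invented for the example), the least-squares structure of such a graph can be sketched in pure Python:

```python
# Minimal 1-D pose-graph sketch (hypothetical example, not the paper's code).
# Nodes: poses x0..x2 and one landmark l. Edges: odometry between consecutive
# poses and landmark observations from individual poses. We minimize the sum
# of squared edge residuals by plain gradient descent, anchoring x0 at 0.

odometry = [(0, 1, 1.0), (1, 2, 1.1)]   # (i, j, measured x_j - x_i)
landmark_obs = [(0, 2.0), (2, 0.05)]    # (pose i, measured l - x_i)

x = [0.0, 0.0, 0.0]   # pose estimates (x0 is the fixed anchor)
l = 0.0               # landmark estimate

for _ in range(2000):
    gx = [0.0, 0.0, 0.0]
    gl = 0.0
    for i, j, z in odometry:
        r = (x[j] - x[i]) - z      # odometry residual
        gx[j] += 2 * r
        gx[i] -= 2 * r
    for i, z in landmark_obs:
        r = (l - x[i]) - z         # landmark-observation residual
        gl += 2 * r
        gx[i] -= 2 * r
    step = 0.05
    for k in (1, 2):               # x0 stays anchored
        x[k] -= step * gx[k]
    l -= step * gl

# The optimum is a compromise between the slightly inconsistent
# odometry and landmark measurements.
```

A production system would use 2-D/3-D poses, robust kernels and a sparse solver such as Gauss-Newton, but the residual structure over pose and landmark nodes is the same.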
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- Leveraging GNSS and Onboard Visual Data from Consumer Vehicles for Robust Road Network Estimation [18.236615392921273]
This paper addresses the challenge of road graph construction for autonomous vehicles.
We propose using global navigation satellite system (GNSS) traces and basic image data acquired from these standard sensors in consumer vehicles.
We exploit the spatial information in the data by framing the problem as a road centerline semantic segmentation task using a convolutional neural network.
arXiv Detail & Related papers (2024-08-03T02:57:37Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Online Map Vectorization for Autonomous Driving: A Rasterization Perspective [58.71769343511168]
We introduce a new rasterization-based evaluation metric, which has superior sensitivity and is better suited to real-world autonomous driving scenarios.
We also propose MapVR (Map Vectorization via Rasterization), a novel framework that applies differentiable rasterization to precise vectorized outputs and then performs geometry-aware supervision on HD maps.
arXiv Detail & Related papers (2023-06-18T08:51:14Z)
- Energy-Based Models for Cross-Modal Localization using Convolutional Transformers [52.27061799824835]
We present a novel framework for localizing a ground vehicle mounted with a range sensor against satellite imagery in the absence of GPS.
We propose a method using convolutional transformers that performs accurate metric-level localization in a cross-modal manner.
We train our model end-to-end and demonstrate our approach achieving higher accuracy than the state-of-the-art on KITTI, Pandaset, and a custom dataset.
arXiv Detail & Related papers (2023-06-06T21:27:08Z)
- RaLL: End-to-end Radar Localization on Lidar Map Using Differentiable Measurement Model [14.155337185792279]
We propose an end-to-end deep learning framework for Radar Localization on Lidar Map (RaLL).
RaLL exploits the mature lidar mapping technique, thus reducing the cost of radar mapping.
Our proposed system achieves superior performance over 90 km of driving, even in generalization scenarios where the model is trained in the UK.
arXiv Detail & Related papers (2020-09-15T13:13:38Z)
- Radar-based Dynamic Occupancy Grid Mapping and Object Detection [55.74894405714851]
In recent years, the classical occupancy grid map approach has been extended to dynamic occupancy grid maps.
This paper presents the further development of a previous approach.
The data of multiple radar sensors are fused, and a grid-based object tracking and mapping method is applied.
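The dynamic occupancy grid work above builds on the classical log-odds occupancy grid, where each radar return raises the occupancy belief of its cell and lowers the belief of cells traversed by the ray. A minimal sketch of that underlying update (illustrative only; the 0.7/0.3 inverse-sensor probabilities and the 1-D grid are assumptions, and the paper's dynamic extension additionally estimates cell velocities):

```python
import math

# Toy log-odds occupancy grid update (hypothetical sketch, not the
# paper's method). A ray hits cell `hit_index`; cells before the hit
# are observed as free, the hit cell as occupied.

L_OCC = math.log(0.7 / 0.3)    # log-odds increment for an occupied cell
L_FREE = math.log(0.3 / 0.7)   # log-odds increment for a free cell

def update_row(logodds, hit_index):
    """Fuse one 1-D radar ray into a row of grid cells."""
    for i in range(hit_index):
        logodds[i] += L_FREE    # traversed cells look free
    logodds[hit_index] += L_OCC # the return cell looks occupied
    return logodds

def prob(lo):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(lo))

row = [0.0] * 5                 # prior p = 0.5 everywhere
for _ in range(3):              # three consistent radar scans
    update_row(row, 3)

probs = [round(prob(lo), 2) for lo in row]
```

Fusing several radar sensors amounts to running this update once per sensor ray; repeated consistent evidence drives cells toward 0 or 1.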
arXiv Detail & Related papers (2020-05-04T13:14:29Z)
- CARRADA Dataset: Camera and Automotive Radar with Range-Angle-Doppler Annotations [0.0]
We introduce CARRADA, a dataset of synchronized camera and radar recordings with range-angle-Doppler annotations.
We also present a semi-automatic annotation approach, which was used to annotate the dataset, and a radar semantic segmentation baseline.
arXiv Detail & Related papers (2020-04-02T22:28:29Z)
- Extraction and Assessment of Naturalistic Human Driving Trajectories from Infrastructure Camera and Radar Sensors [0.0]
We present a novel methodology to extract trajectories of traffic objects using infrastructure sensors.
Our vision pipeline accurately detects objects, fuses camera and radar detections and tracks them over time.
We show that our sensor fusion approach successfully combines the advantages of camera and radar detections and outperforms either single sensor.
arXiv Detail & Related papers (2020-04-02T22:28:29Z)
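The camera-radar fusion in the last entry exploits complementary strengths: radar measures range accurately, cameras measure bearing accurately. A hypothetical nearest-neighbour fusion sketch (the gate value, the `(range, bearing)` representation, and the combination rule are assumptions for illustration, not the paper's pipeline):

```python
import math

def fuse(camera_dets, radar_dets, gate=1.0):
    """Associate camera and radar detections by nearest neighbour,
    then combine each matched pair by taking the radar range and the
    camera bearing. Detections are (range_m, bearing_rad) tuples."""
    fused = []
    for cr, cb in camera_dets:
        best, best_d = None, gate
        for rr, rb in radar_dets:
            d = math.hypot(rr - cr, rb - cb)
            if d < best_d:          # closest radar detection in the gate
                best, best_d = (rr, rb), d
        if best is None:
            fused.append((cr, cb))  # unmatched: keep camera-only detection
        else:
            fused.append((best[0], cb))  # radar range, camera bearing
    return fused
```

For example, a camera detection at (10.5 m, 0.10 rad) matched with a radar detection at (10.0 m, 0.12 rad) yields a fused detection using the radar's range and the camera's bearing; real systems would fuse in a common state space with per-sensor covariances instead.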
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.