Contrastive Learning for Unsupervised Radar Place Recognition
- URL: http://arxiv.org/abs/2110.02744v1
- Date: Wed, 6 Oct 2021 13:34:09 GMT
- Title: Contrastive Learning for Unsupervised Radar Place Recognition
- Authors: Matthew Gadd, Daniele De Martini, Paul Newman
- Abstract summary: We learn, in an unsupervised way, an embedding from sequences of radar images that is suitable for solving the place recognition problem with complex radar data.
We experiment across two prominent urban radar datasets totalling over 400 km of driving and show that we achieve a new radar place recognition state-of-the-art.
- Score: 31.04172735067443
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We learn, in an unsupervised way, an embedding from sequences of radar images
that is suitable for solving the place recognition problem with complex radar
data. Our method is based on invariant instance feature learning, but is
tailored to the task of re-localisation by exploiting, for data augmentation,
the temporal succession of data collected by a mobile platform moving
smoothly through the scene. We experiment across two prominent urban radar
datasets totalling over 400 km of driving and show that we achieve a new radar
place recognition state-of-the-art. Specifically, the proposed system is
correct for 98.38% of the queries presented to it over a challenging
re-localisation sequence, using only the single nearest neighbour in the
learned metric space. We also find that our learned model handles
out-of-lane loop closures at arbitrary orientations better than non-learned
radar scan descriptors do.
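The method summarised above admits a compact illustration: scans captured
moments apart by a smoothly moving platform can be treated as two views of
the same place, yielding positive pairs for an instance-discrimination
(InfoNCE-style) objective with no ground-truth poses. Below is a minimal
sketch of that recipe and of single-nearest-neighbour retrieval, not the
authors' implementation; the encoder interface, batch layout, temperature,
and all function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def temporal_infonce_loss(encoder, scans_t, scans_t1, temperature=0.1):
    """Contrastive loss with temporal positives (hypothetical sketch).

    scans_t, scans_t1: (B, 1, H, W) radar scans at times t and t+1; the
    i-th rows are consecutive frames from one trajectory, so they form
    positive pairs while all other batch items act as negatives.
    """
    z_a = F.normalize(encoder(scans_t), dim=1)   # (B, D) unit embeddings
    z_b = F.normalize(encoder(scans_t1), dim=1)  # (B, D)
    logits = z_a @ z_b.T / temperature           # (B, B) cosine similarities
    targets = torch.arange(z_a.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)      # diagonal entries = positives

def nearest_neighbour(query_z, database_z):
    """Single-nearest-neighbour retrieval in the learned metric space.

    query_z: (D,) query embedding; database_z: (N, D) map embeddings.
    Returns the index of the most similar database entry.
    """
    sims = F.normalize(database_z, dim=1) @ F.normalize(query_z, dim=0)
    return int(sims.argmax())
```

At query time, re-localisation then reduces to embedding the incoming scan
and accepting its single nearest database neighbour; this is the retrieval
regime under which the 98.38% figure above is reported.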
Related papers
- SparseRadNet: Sparse Perception Neural Network on Subsampled Radar Data [5.344444942640663]
Raw radar data often contains excessive noise, whereas radar point clouds retain only limited information.
We introduce an adaptive subsampling method together with a tailored network architecture that exploits the sparsity patterns.
Experiments on the RADIal dataset show that our SparseRadNet exceeds state-of-the-art (SOTA) performance in object detection and achieves close to SOTA accuracy in freespace segmentation.
arXiv Detail & Related papers (2024-06-15T11:26:10Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- RaLF: Flow-based Global and Metric Radar Localization in LiDAR Maps [8.625083692154414]
We propose RaLF, a novel deep neural network-based approach for localizing radar scans in a LiDAR map of the environment.
RaLF is composed of radar and LiDAR feature encoders, a place recognition head that generates global descriptors, and a metric localization head that predicts the 3-DoF transformation between the radar scan and the map.
We extensively evaluate our approach on multiple real-world driving datasets and show that RaLF achieves state-of-the-art performance for both place recognition and metric localization.
arXiv Detail & Related papers (2023-09-18T15:37:01Z)
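The RaLF entry above outlines a two-head architecture: feature encoders per
modality, a head producing global descriptors for place recognition, and a
head regressing a 3-DoF scan-to-map transform. As a loose, hypothetical
skeleton of that shape (not the authors' implementation; input formats,
layer sizes, and all names are invented):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder(feat_dim):
    # Stand-in convolutional encoder for a single-channel BEV image.
    return nn.Sequential(
        nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
        nn.Conv2d(32, feat_dim, 5, stride=2, padding=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class TwoHeadLocaliser(nn.Module):
    """Hypothetical RaLF-style skeleton: per-modality encoders, two heads."""

    def __init__(self, feat_dim=256, desc_dim=128):
        super().__init__()
        self.radar_encoder = make_encoder(feat_dim)
        self.lidar_encoder = make_encoder(feat_dim)
        self.desc_head = nn.Linear(feat_dim, desc_dim)  # global descriptors
        self.pose_head = nn.Linear(2 * feat_dim, 3)     # (x, y, yaw) offset

    def forward(self, radar_scan, lidar_map_patch):
        f_radar = self.radar_encoder(radar_scan)       # (B, feat_dim)
        f_lidar = self.lidar_encoder(lidar_map_patch)  # (B, feat_dim)
        desc_radar = F.normalize(self.desc_head(f_radar), dim=1)
        desc_lidar = F.normalize(self.desc_head(f_lidar), dim=1)
        pose = self.pose_head(torch.cat([f_radar, f_lidar], dim=1))
        return desc_radar, desc_lidar, pose
```

Matching radar and map descriptors handles coarse place recognition, while
the pose head refines the match metrically; the paper's title suggests the
actual method uses flow-based features rather than this simplified
concatenation.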
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning-based method that applies convolutions to point clouds of radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- Unsupervised Place Recognition with Deep Embedding Learning over Radar Videos [31.04172735067443]
We learn, in an unsupervised way, an embedding from sequences of radar images that is suitable for solving the place recognition problem using complex radar data.
We show performance exceeding that of state-of-the-art supervised approaches, correctly localising 98.38% of the time when using just the nearest database candidate.
arXiv Detail & Related papers (2021-06-12T07:14:15Z)
- RadarLoc: Learning to Relocalize in FMCW Radar [36.68888832365474]
We propose a novel end-to-end neural network with self-attention, termed RadarLoc, which is able to estimate 6-DoF global poses directly.
We validate our approach on the recently released challenging outdoor dataset Oxford Radar RobotCar.
arXiv Detail & Related papers (2021-03-22T03:22:37Z)
- Radar-to-Lidar: Heterogeneous Place Recognition via Joint Learning [11.259276512983492]
In this paper, a framework based on heterogeneous measurements is proposed for long-term place recognition.
A deep neural network is trained jointly in the learning stage; in the testing stage, shared embeddings of radar and lidar scans are extracted for heterogeneous place recognition.
The experimental results indicate that our model can perform multiple kinds of place recognition: lidar-to-lidar, radar-to-radar, and radar-to-lidar, while being trained only once.
arXiv Detail & Related papers (2021-01-30T15:34:58Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.