Self-Supervised Localisation between Range Sensors and Overhead Imagery
- URL: http://arxiv.org/abs/2006.02108v2
- Date: Wed, 23 Sep 2020 12:49:46 GMT
- Title: Self-Supervised Localisation between Range Sensors and Overhead Imagery
- Authors: Tim Y. Tang, Daniele De Martini, Shangzhe Wu, Paul Newman
- Abstract summary: Publicly available satellite imagery can be a ubiquitous, cheap, and powerful tool for vehicle localisation when a prior sensor map is unavailable.
We present a learned metric localisation method that not only handles the modality difference, but is cheap to train, learning in a self-supervised fashion without metrically accurate ground truth.
- Score: 24.18942374703494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Publicly available satellite imagery can be a ubiquitous, cheap, and
powerful tool for vehicle localisation when a prior sensor map is unavailable.
However, satellite images are not directly comparable to data from ground range
sensors because of their starkly different modalities. We present a learned
metric localisation method that not only handles the modality difference, but
is cheap to train, learning in a self-supervised fashion without metrically
accurate ground truth. By evaluating across multiple real-world datasets, we
demonstrate the robustness and versatility of our method for various sensor
configurations. We pay particular attention to the use of millimetre wave
radar, which, owing to its complex interaction with the scene and its immunity
to weather and lighting, makes for a compelling and valuable use case.
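The paper's contribution is the learned bridging of the radar-to-satellite modality gap; what follows is only a rough, hypothetical sketch of the exhaustive pose search that correlation-based metric localisation of this kind often reduces to once both modalities have been embedded into a common feature space. The function names, the FFT correlation scheme, and the 2-degree rotation sweep are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's method): recover a planar pose
# offset between a live range-sensor feature map and an overhead-image
# feature map, assuming both already live in a shared embedding space.
import numpy as np
from scipy.ndimage import rotate

def correlate_translation(live, overhead):
    """Return (dy, dx, score) maximising circular cross-correlation."""
    # Cross-correlation via the convolution theorem.
    spec = np.fft.rfft2(overhead) * np.conj(np.fft.rfft2(live))
    corr = np.fft.irfft2(spec, s=overhead.shape)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx, corr[dy, dx]

def localise(live, overhead, angles_deg=np.arange(0.0, 360.0, 2.0)):
    """Exhaustive SE(2) search: best (theta, dy, dx) over a rotation sweep."""
    best = (0.0, 0, 0, -np.inf)
    for theta in angles_deg:
        rotated = rotate(live, theta, reshape=False, order=1)
        dy, dx, score = correlate_translation(rotated, overhead)
        if score > best[3]:
            best = (theta, dy, dx, score)
    return best  # (theta, dy, dx, score)
```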
Related papers
- SOAC: Spatio-Temporal Overlap-Aware Multi-Sensor Calibration using Neural Radiance Fields [10.958143040692141]
In rapidly evolving domains such as autonomous driving, the use of multiple sensors with different modalities is crucial for operational precision and stability.
To correctly exploit the information provided by each sensor in a single common frame, it is essential that these sensors be accurately calibrated.
We leverage the ability of Neural Radiance Fields to represent different modalities in a common representation.
arXiv Detail & Related papers (2023-11-27T13:25:47Z)
- Energy-Based Models for Cross-Modal Localization using Convolutional Transformers [52.27061799824835]
We present a novel framework for localizing a ground vehicle equipped with a range sensor against satellite imagery in the absence of GPS.
We propose a method using convolutional transformers that performs accurate metric-level localization in a cross-modal manner.
We train our model end-to-end and demonstrate our approach achieving higher accuracy than the state-of-the-art on KITTI, Pandaset, and a custom dataset.
arXiv Detail & Related papers (2023-06-06T21:27:08Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automatizes the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- Learning Online Multi-Sensor Depth Fusion [100.84519175539378]
SenFuNet is a depth fusion approach that learns sensor-specific noise and outlier statistics.
We conduct experiments with various sensor combinations on the real-world CoRBS and Scene3D datasets.
arXiv Detail & Related papers (2022-04-07T10:45:32Z)
- Continuous Self-Localization on Aerial Images Using Visual and Lidar Sensors [25.87104194833264]
We propose a novel method for geo-tracking in outdoor environments by registering a vehicle's sensor information with aerial imagery of an unseen target region.
We train a model in a metric learning setting to extract visual features from ground and aerial images.
Our method is the first to utilize on-board cameras in an end-to-end differentiable model for metric self-localization on unseen orthophotos.
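As a sketch of what the metric-learning objective mentioned above commonly looks like, the snippet below implements a standard triplet margin loss on L2-normalised ground and aerial embeddings. The margin value and variable names are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical triplet objective for ground-to-aerial metric learning:
# pull a ground image's embedding towards its matching aerial tile and
# push it away from a non-matching tile.
import torch
import torch.nn.functional as F

def triplet_loss(ground_emb, aerial_pos, aerial_neg, margin=0.5):
    """Triplet margin loss on L2-normalised embedding batches."""
    g = F.normalize(ground_emb, dim=-1)
    p = F.normalize(aerial_pos, dim=-1)
    n = F.normalize(aerial_neg, dim=-1)
    d_pos = (g - p).norm(dim=-1)  # distance to the matching aerial tile
    d_neg = (g - n).norm(dim=-1)  # distance to a non-matching tile
    return F.relu(d_pos - d_neg + margin).mean()
```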
arXiv Detail & Related papers (2022-03-07T12:25:44Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability constraints.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
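For a symmetric information matrix, the principal component analysis mentioned above amounts to an eigendecomposition: a weak smallest eigenvalue indicates a direction along which translation, and hence scale, is poorly constrained. The scoring heuristic below is an assumption for illustration only, not the paper's exact criterion.

```python
# Illustrative sketch: score scale-drift risk from the spectrum of the
# 3x3 relative translation information matrix.
import numpy as np

def scale_drift_risk(info_t, eps=1e-9):
    """Risk in [0, 1]: high when the weakest eigen-direction is poorly
    constrained relative to the strongest one."""
    eigvals = np.linalg.eigvalsh(info_t)  # ascending order
    weakest, strongest = eigvals[0], eigvals[-1]
    # An ill-conditioned spectrum means translation is weakly
    # observable along one direction, so scale can drift.
    return 1.0 - weakest / (strongest + eps)
```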
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- Infrared Beacons for Robust Localization [58.720142291102135]
This paper presents a localization system that uses infrared beacons and a camera equipped with an optical band-pass filter.
Our system can reliably detect and identify individual beacons at a distance of 100 m, regardless of lighting conditions.
arXiv Detail & Related papers (2021-04-19T14:23:20Z)
- Self-supervised Multisensor Change Detection [14.191073951237772]
We revisit multisensor analysis in the context of self-supervised change detection in bi-temporal satellite images.
Recent developments in self-supervised learning have shown that some methods can work with only a few images.
Motivated by this, we propose a method for multi-sensor change detection that uses only the unlabeled target bi-temporal images.
arXiv Detail & Related papers (2021-02-12T12:31:10Z)
- Cross-Sensor Adversarial Domain Adaptation of Landsat-8 and Proba-V images for Cloud Detection [1.5828697880068703]
The number of Earth observation satellites carrying optical sensors with similar characteristics is constantly growing.
Differences in retrieved radiances lead to significant drops in accuracy, which hampers knowledge and information sharing across sensors.
We propose a domain adaptation method to reduce the statistical differences between images from the two satellite sensors in order to boost the performance of transfer learning models.
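One standard ingredient for the adversarial domain adaptation named in the title is a gradient reversal layer in the style of Ganin & Lempitsky (2015): a domain classifier learns to tell the two sensors apart while reversed gradients push the feature extractor towards sensor-invariant features. The PyTorch sketch below shows that ingredient only; the paper's actual architecture may differ.

```python
# Hypothetical gradient reversal layer: identity in the forward pass,
# negated and scaled gradient in the backward pass.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the feature extractor.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```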
arXiv Detail & Related papers (2020-06-10T16:16:01Z)
- Deep Soft Procrustes for Markerless Volumetric Sensor Alignment [81.13055566952221]
In this work, we improve markerless data-driven correspondence estimation to achieve more robust multi-sensor spatial alignment.
We incorporate geometric constraints in an end-to-end manner into a typical segmentation-based model, bridging the intermediate dense classification task with the target pose estimation task.
Our model is experimentally shown to achieve results comparable to marker-based methods and to outperform markerless ones, while also being robust to pose variations of the calibration structure.
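At the core of any soft-Procrustes formulation is a closed-form weighted rigid alignment: given soft correspondences with confidence weights, the optimal rotation and translation follow from an SVD (the Kabsch solution). The sketch below illustrates that generic step under those assumptions, not the authors' exact model.

```python
# Generic weighted Procrustes/Kabsch step: rigid transform (R, t)
# minimising sum_i w_i * ||R @ src_i + t - dst_i||^2 for Nx3 point sets.
import numpy as np

def weighted_procrustes(src, dst, w):
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)  # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```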
arXiv Detail & Related papers (2020-03-23T10:51:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.