4Seasons: Benchmarking Visual SLAM and Long-Term Localization for
Autonomous Driving in Challenging Conditions
- URL: http://arxiv.org/abs/2301.01147v1
- Date: Sat, 31 Dec 2022 13:52:36 GMT
- Authors: Patrick Wenzel, Nan Yang, Rui Wang, Niclas Zeller, Daniel Cremers
- Abstract summary: We present a novel visual SLAM and long-term localization benchmark for autonomous driving in challenging conditions based on the large-scale 4Seasons dataset.
The proposed benchmark provides drastic appearance variations caused by seasonal changes and diverse weather and illumination conditions.
We introduce a new unified benchmark for jointly evaluating visual odometry, global place recognition, and map-based visual localization performance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a novel visual SLAM and long-term localization
benchmark for autonomous driving in challenging conditions based on the
large-scale 4Seasons dataset. The proposed benchmark provides drastic
appearance variations caused by seasonal changes and diverse weather and
illumination conditions. While significant progress has been made in advancing
visual SLAM on small-scale datasets with similar conditions, there is still a
lack of unified benchmarks representative of real-world scenarios for
autonomous driving. We introduce a new unified benchmark for jointly evaluating
visual odometry, global place recognition, and map-based visual localization
performance, which is crucial for enabling autonomous driving in any
condition. The data has been collected for more than one year, resulting in
more than 300 km of recordings in nine different environments ranging from a
multi-level parking garage to urban (including tunnels) to countryside and
highway. We provide globally consistent reference poses with up to
centimeter-level accuracy obtained from the fusion of direct stereo-inertial
odometry with RTK GNSS. We evaluate the performance of several state-of-the-art
visual odometry and visual localization baseline approaches on the benchmark
and analyze their properties. The experimental results provide new insights
into current approaches and show promising potential for future research. Our
benchmark and evaluation protocols will be available at
https://www.4seasons-dataset.com/.
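The abstract above describes evaluating visual odometry baselines against globally consistent, centimeter-accurate reference poses. As background (this is a standard community practice, not necessarily the exact protocol of this benchmark), such comparisons are often reported as absolute trajectory error (ATE) after rigidly aligning the estimated trajectory to the ground truth. A minimal NumPy sketch, with illustrative function names of our own choosing:

```python
import numpy as np

def align_rigid(est, gt):
    """Least-squares rigid (SE(3)) alignment of estimated positions to
    ground-truth positions via SVD (Umeyama/Kabsch-style), a common
    preprocessing step before computing ATE. est, gt: (N, 3) arrays."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    # Cross-covariance between centered ground truth and centered estimate.
    U, _, Vt = np.linalg.svd(G.T @ E / len(est))
    # Guard against a reflection in the least-squares solution.
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0
    R = U @ D @ Vt
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """Absolute trajectory error: RMSE of position residuals after rigid
    alignment of the estimated trajectory to the reference."""
    R, t = align_rigid(est, gt)
    aligned = est @ R.T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
```

Because the alignment removes any global rigid offset, a trajectory that differs from the reference only by rotation and translation scores an ATE of (numerically) zero; residual error then reflects genuine drift or noise in the odometry estimate.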
Related papers
- XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z)
- NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking [65.24988062003096]
We present NAVSIM, a framework for benchmarking vision-based driving policies.
Our simulation is non-reactive, i.e., the evaluated policy and environment do not influence each other.
NAVSIM enabled a new competition held at CVPR 2024, where 143 teams submitted 463 entries, resulting in several new insights.
arXiv Detail & Related papers (2024-06-21T17:59:02Z) - Is Ego Status All You Need for Open-Loop End-to-End Autonomous Driving? [84.17711168595311]
End-to-end autonomous driving has emerged as a promising research direction to target autonomy from a full-stack perspective.
The nuScenes dataset, characterized by relatively simple driving scenarios, leads to an under-utilization of perception information in end-to-end models.
We introduce a new metric to evaluate whether the predicted trajectories adhere to the road.
arXiv Detail & Related papers (2023-12-05T11:32:31Z) - What you see is what you get: Experience ranking with deep neural
dataset-to-dataset similarity for topological localisation [19.000718685399935]
We propose applying the recently developed Visual DNA as a highly scalable tool for comparing datasets of images.
In the case of localisation, important dataset differences impacting performance are modes of appearance change, including weather, lighting, and season.
We find that differences in these statistics correlate to performance when localising using a past experience with the same appearance gap.
arXiv Detail & Related papers (2023-10-20T16:13:21Z) - CrowdDriven: A New Challenging Dataset for Outdoor Visual Localization [44.97567243883994]
We propose a new benchmark for visual localization in outdoor scenes using crowd-sourced data.
We show that our dataset is very challenging, with all evaluated methods failing on its hardest parts.
As part of the dataset release, we provide the tooling used to generate it, enabling efficient and effective 2D correspondence annotation.
arXiv Detail & Related papers (2021-09-09T19:25:48Z) - SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous
Driving [94.11868795445798]
We release a Large-Scale Object Detection benchmark for Autonomous driving, named as SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, one frame is collected every ten seconds across 32 different cities under varying weather conditions, times of day, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z) - 4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous
Driving [48.588254700810474]
We present a novel dataset covering seasonal and challenging perceptual conditions for autonomous driving.
Among others, it enables research on visual odometry, global place recognition, and map-based re-localization tracking.
arXiv Detail & Related papers (2020-09-14T12:31:20Z) - Real-time Kinematic Ground Truth for the Oxford RobotCar Dataset [23.75606166843614]
We release reference data towards a challenging long-term localisation and mapping benchmark based on the large-scale Oxford RobotCar dataset.
We have produced a globally-consistent centimetre-accurate ground truth for the entire year-long duration of the dataset.
arXiv Detail & Related papers (2020-02-24T10:34:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.