SCOPE: A Synthetic Multi-Modal Dataset for Collective Perception Including Physical-Correct Weather Conditions
- URL: http://arxiv.org/abs/2408.03065v1
- Date: Tue, 6 Aug 2024 09:35:50 GMT
- Title: SCOPE: A Synthetic Multi-Modal Dataset for Collective Perception Including Physical-Correct Weather Conditions
- Authors: Jörg Gamerdinger, Sven Teufel, Patrick Schulz, Stephan Amann, Jan-Patrick Kirchner, Oliver Bringmann
- Abstract summary: SCOPE is the first synthetic multi-modal dataset that incorporates realistic camera and LiDAR models as well as parameterized and physically accurate weather simulations.
The dataset contains 17,600 frames from over 40 diverse scenarios with up to 24 collaborative agents, infrastructure sensors, and passive traffic, including cyclists and pedestrians.
- Score: 0.5026434955540995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collective perception has received considerable attention as a promising approach to overcome occlusions and limited sensing ranges of vehicle-local perception in autonomous driving. In order to develop and test novel collective perception technologies, appropriate datasets are required. These datasets must include not only different environmental conditions, as they strongly influence the perception capabilities, but also a wide range of scenarios with different road users as well as realistic sensor models. Therefore, we propose the Synthetic COllective PErception (SCOPE) dataset. SCOPE is the first synthetic multi-modal dataset that incorporates realistic camera and LiDAR models as well as parameterized and physically accurate weather simulations for both sensor types. The dataset contains 17,600 frames from over 40 diverse scenarios with up to 24 collaborative agents, infrastructure sensors, and passive traffic, including cyclists and pedestrians. In addition, recordings from two novel digital-twin maps from Karlsruhe and Tübingen are included. The dataset is available at https://ekut-es.github.io/scope
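The abstract does not describe SCOPE's on-disk layout or loading API, so the following Python sketch is purely illustrative: it assumes a hypothetical per-scenario directory with per-agent camera images and KITTI-style LiDAR binaries plus a weather-metadata JSON. Every path and field name here is an assumption; see https://ekut-es.github.io/scope for the actual structure.

```python
import json
from pathlib import Path

import numpy as np

# Hypothetical layout: <root>/<scenario>/<agent>/{camera/*.png, lidar/*.bin}
# plus one weather.json per scenario. These names are illustrative only;
# consult the dataset page for the real structure.
def iter_scope_frames(root: str, scenario: str):
    scenario_dir = Path(root) / scenario
    weather = json.loads((scenario_dir / "weather.json").read_text())
    for agent_dir in sorted(p for p in scenario_dir.iterdir() if p.is_dir()):
        for lidar_file in sorted((agent_dir / "lidar").glob("*.bin")):
            # Assume float32 (x, y, z, intensity) records, KITTI-style.
            points = np.fromfile(lidar_file, dtype=np.float32).reshape(-1, 4)
            yield {
                "agent": agent_dir.name,
                "frame": lidar_file.stem,
                "points": points,
                "image_path": agent_dir / "camera" / (lidar_file.stem + ".png"),
                "weather": weather,
            }

for sample in iter_scope_frames("/data/scope", "karlsruhe_rain_01"):
    print(sample["agent"], sample["frame"], sample["points"].shape)
```

A real loader would follow the published format; the point is only the shape of the iteration: one scenario, many collaborative agents, synchronized camera and LiDAR frames, shared weather parameters.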
Related papers
- DrivingSphere: Building a High-fidelity 4D World for Closed-loop Simulation [54.02069690134526]
We propose DrivingSphere, a realistic and closed-loop simulation framework.
Its core idea is to build a 4D world representation and generate realistic, controllable driving scenarios.
By providing a dynamic and realistic simulation environment, DrivingSphere enables comprehensive testing and validation of autonomous driving algorithms.
arXiv Detail & Related papers (2024-11-18T03:00:33Z)
- Digital twins to alleviate the need for real field data in vision-based vehicle speed detection systems [0.9899633398596672]
Accurate vision-based speed estimation is more cost-effective than traditional methods based on radar or LiDAR.
Deep learning approaches are very limited in this context due to the lack of available data.
In this work, we propose using digital twins built in the CARLA simulator to generate a large dataset representative of a specific real-world camera.
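As a rough illustration of the digital-twin idea, the sketch below spawns a static roadside RGB camera in CARLA whose resolution and field of view are set to mimic a calibrated real camera. The pose and intrinsic values are placeholders, not the paper's calibration; only the CARLA API calls themselves are real.

```python
import carla

# Connect to a running CARLA server (default port). The intrinsics below
# are placeholders standing in for a calibrated real-world speed camera.
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

cam_bp = world.get_blueprint_library().find("sensor.camera.rgb")
cam_bp.set_attribute("image_size_x", "1920")
cam_bp.set_attribute("image_size_y", "1080")
cam_bp.set_attribute("fov", "50")  # match the real camera's horizontal FOV

# Fixed roadside pose, e.g. a pole-mounted camera looking down at the lane.
pose = carla.Transform(
    carla.Location(x=10.0, y=0.0, z=6.0),
    carla.Rotation(pitch=-20.0, yaw=180.0),
)
camera = world.spawn_actor(cam_bp, pose)
camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))
```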
arXiv Detail & Related papers (2024-07-11T10:41:20Z)
- XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
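An NVS benchmark of this kind typically reduces to image-similarity metrics between synthesized and held-out ground-truth views; a minimal scikit-image sketch follows. The metric choice (PSNR/SSIM) and file names are assumptions, since the abstract does not state the evaluation protocol.

```python
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Paths are placeholders; XLD's actual protocol may use different metrics.
gt = imread("gt_view.png")
pred = imread("synthesized_view.png")

psnr = peak_signal_noise_ratio(gt, pred)
ssim = structural_similarity(gt, pred, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```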
arXiv Detail & Related papers (2024-06-26T14:00:21Z)
- SCaRL- A Synthetic Multi-Modal Dataset for Autonomous Driving [0.0]
We present a novel synthetically generated multi-modal dataset, SCaRL, to enable the training and validation of autonomous driving solutions.
SCaRL is a large dataset based on the CARLA Simulator, which provides data for diverse, dynamic scenarios and traffic conditions.
arXiv Detail & Related papers (2024-05-27T10:31:26Z)
- SEVD: Synthetic Event-based Vision Dataset for Ego and Fixed Traffic Perception [22.114089372056238]
We present SEVD, a first-of-its-kind synthetic event-based dataset covering both multi-view ego and fixed (infrastructure) perception.
SEVD spans urban, suburban, rural, and highway scenes featuring various classes of objects.
We evaluate the dataset using state-of-the-art event-based (RED, RVT) and frame-based (YOLOv8) methods for traffic participant detection tasks.
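For the frame-based baseline, the ultralytics package makes a comparable evaluation easy to reproduce. The sketch below runs a pretrained YOLOv8 detector over rendered frames; the directory path, weights, and confidence threshold are arbitrary stand-ins, not SEVD's protocol.

```python
from pathlib import Path

from ultralytics import YOLO  # pip install ultralytics

# Pretrained COCO weights as a stand-in; SEVD's own evaluation classes
# and thresholds may differ.
model = YOLO("yolov8n.pt")

for frame in sorted(Path("sevd_frames").glob("*.png")):
    results = model.predict(source=str(frame), conf=0.25, verbose=False)
    for box in results[0].boxes:
        cls_name = model.names[int(box.cls)]
        print(frame.name, cls_name, float(box.conf), box.xyxy[0].tolist())
```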
arXiv Detail & Related papers (2024-04-12T20:40:12Z)
- Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset from the harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The focus is not only on capturing temporal and spatial data diversity but also on showing the impact of harsh conditions on the captured data.
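Since the data ships as ROS bags, a minimal ROS 1 sketch for pulling synchronized sensor messages out of a bag looks like this. The bag file and topic names are guesses for illustration; the dataset's documentation lists the actual topics recorded from the onboard sensors.

```python
import rosbag  # ROS 1 Python API

# Topic names are hypothetical; check the dataset's README for the real ones.
TOPICS = ["/os_cloud_node/points", "/camera/image_raw", "/imu/data"]

with rosbag.Bag("subt_aerosol_run1.bag") as bag:
    for topic, msg, stamp in bag.read_messages(topics=TOPICS):
        print(stamp.to_sec(), topic, type(msg).__name__)
```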
arXiv Detail & Related papers (2023-04-27T20:21:18Z)
- TRoVE: Transforming Road Scene Datasets into Photorealistic Virtual Environments [84.6017003787244]
This work proposes a synthetic data generation pipeline to address the difficulties and domain-gaps present in simulated datasets.
We show that using annotations and visual cues from existing datasets, we can facilitate automated multi-modal data generation.
arXiv Detail & Related papers (2022-08-16T20:46:08Z)
- SELMA: SEmantic Large-scale Multimodal Acquisitions in Variable Weather, Daytime and Viewpoints [36.57734409668748]
We introduce SELMA, a novel dataset for semantic segmentation.
It contains more than 30K unique waypoints acquired from 24 different sensors including RGB, depth, semantic cameras and LiDARs.
It is based on CARLA, an open-source simulator for generating synthetic data in autonomous driving scenarios.
arXiv Detail & Related papers (2022-04-20T21:22:56Z)
- Vision in adverse weather: Augmentation using CycleGANs with various object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach that uses synthesised adverse-condition datasets (generated with CycleGAN) in autonomous racing, improving the performance of four out of five state-of-the-art detectors.
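The augmentation step itself amounts to pushing clear-weather frames through a trained clear-to-adverse generator. Here is a hedged PyTorch sketch assuming a TorchScript export of such a generator; the checkpoint name is hypothetical, and the 256x256 resolution and [-1, 1] normalization are assumptions, albeit the usual CycleGAN conventions.

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import to_pil_image

# Hypothetical TorchScript export of a trained clear->rain CycleGAN generator.
generator = torch.jit.load("clear2rain_generator.pt").eval()

to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),                         # [0, 1]
    transforms.Normalize((0.5,) * 3, (0.5,) * 3),  # CycleGAN's usual [-1, 1]
])

image = Image.open("clear_frame.png").convert("RGB")
with torch.no_grad():
    fake = generator(to_tensor(image).unsqueeze(0))[0]
# Map back from [-1, 1] to [0, 1] and save the synthesized adverse frame.
to_pil_image((fake.clamp(-1, 1) + 1) / 2).save("rain_frame.png")
```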
arXiv Detail & Related papers (2022-01-10T10:02:40Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- High-Precision Digital Traffic Recording with Multi-LiDAR Infrastructure Sensor Setups [0.0]
We investigate the impact of fused LiDAR point clouds compared to single LiDAR point clouds.
The evaluation of the extracted trajectories shows that a fused infrastructure approach significantly improves tracking results, reaching accuracies within a few centimeters.
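Mechanically, fusing infrastructure LiDARs is a matter of applying each sensor's calibrated extrinsics and concatenating the clouds in a common frame; a minimal NumPy sketch with made-up 4x4 extrinsic matrices and random demo points is shown below.

```python
import numpy as np

def to_common_frame(points: np.ndarray, extrinsic: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) cloud by a 4x4 sensor-to-world extrinsic."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ extrinsic.T)[:, :3]

# Made-up extrinsics standing in for a calibrated infrastructure setup.
extrinsics = {
    "lidar_north": np.eye(4),
    "lidar_south": np.array([
        [-1.0, 0.0, 0.0, 20.0],
        [0.0, -1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]),
}
clouds = {name: np.random.rand(1000, 3) * 10 for name in extrinsics}  # demo data

fused = np.vstack([to_common_frame(clouds[n], extrinsics[n]) for n in extrinsics])
print(fused.shape)  # one cloud in a common world frame
```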
arXiv Detail & Related papers (2020-06-22T10:57:52Z)