Bosch Street Dataset: A Multi-Modal Dataset with Imaging Radar for Automated Driving
- URL: http://arxiv.org/abs/2407.12803v1
- Date: Mon, 24 Jun 2024 14:40:56 GMT
- Title: Bosch Street Dataset: A Multi-Modal Dataset with Imaging Radar for Automated Driving
- Authors: Karim Armanious, Maurice Quach, Michael Ulrich, Timo Winterling, Johannes Friesen, Sascha Braun, Daniel Jenet, Yuri Feldman, Eitan Kosman, Philipp Rapp, Volker Fischer, Marc Sons, Lukas Kohns, Daniel Eckstein, Daniela Egbert, Simone Letsch, Corinna Voege, Felix Huttner, Alexander Bartler, Robert Maiwald, Yancong Lin, Ulf Rüegg, Claudius Gläser, Bastian Bischoff, Jascha Freess, Karsten Haug, Kathrin Klee, Holger Caesar
- Abstract summary: Bosch street dataset (BSD) is a novel multi-modal large-scale dataset aimed at promoting highly automated driving (HAD) and advanced driver-assistance systems (ADAS) research.
BSD offers a unique integration of high-resolution imaging radar, lidar, and camera sensors, providing unprecedented 360-degree coverage.
- Score: 31.425265308487088
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper introduces the Bosch street dataset (BSD), a novel multi-modal large-scale dataset aimed at promoting highly automated driving (HAD) and advanced driver-assistance systems (ADAS) research. Unlike existing datasets, BSD offers a unique integration of high-resolution imaging radar, lidar, and camera sensors, providing unprecedented 360-degree coverage to bridge the current gap in high-resolution radar data availability. Spanning urban, rural, and highway environments, BSD enables detailed exploration into radar-based object detection and sensor fusion techniques. The dataset is intended to facilitate academic and research collaborations between Bosch and current and future partners, fostering joint efforts in developing cutting-edge HAD and ADAS technologies. The paper describes the dataset's key attributes, including its scalability, radar resolution, and labeling methodology. Key offerings also include initial benchmarks for sensor modalities and a development kit tailored for extensive data analysis and performance evaluation, underscoring our commitment to contributing valuable resources to the HAD and ADAS research community.
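To give a concrete flavor of the camera-lidar fusion work the dataset and its development kit are meant to support, below is a minimal, runnable sketch of the basic geometric step: projecting lidar points into a camera image. All calibration values and points are invented for illustration; BSD's actual calibration ships with its development kit.

```python
import numpy as np

# Hypothetical calibration for illustration only; the real intrinsics and
# extrinsics come with the dataset's development kit.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 600.0],
              [0.0, 0.0, 1.0]])          # pinhole camera intrinsics
T_cam_lidar = np.eye(4)                  # lidar-to-camera extrinsics (identity placeholder)

# Fake lidar points located in front of the sensor.
points = np.random.rand(100, 3) * 20.0 + np.array([0.0, 0.0, 5.0])

# Transform into the camera frame, then apply the pinhole projection.
pts_h = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coordinates
pts_cam = (T_cam_lidar @ pts_h.T)[:3]                     # 3 x N camera-frame points
uv = (K @ pts_cam) / pts_cam[2]                           # perspective divide
in_front = pts_cam[2] > 0                                 # keep points ahead of the camera
print(uv[:2, in_front].T[:5])                             # pixel coordinates of 5 points
```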
Related papers
- SmartPNT-MSF: A Multi-Sensor Fusion Dataset for Positioning and Navigation Research [5.758433879018026]
This dataset integrates data from multiple sensors, including Global Navigation Satellite Systems (GNSS), Inertial Measurement Units (IMU), optical cameras, and LiDAR. A standardized framework for data collection and processing ensures consistency and scalability, enabling large-scale analysis. The dataset covers a wide range of real-world scenarios, including urban areas, campuses, tunnels, and suburban environments.
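As a rough illustration of the GNSS/IMU fusion this kind of dataset targets, here is a minimal 1-D constant-velocity Kalman filter that integrates IMU-like acceleration and corrects with GNSS-like position fixes. Rates, noise levels, and measurements are invented assumptions, not properties of SmartPNT-MSF.

```python
import numpy as np

dt = 0.01                                   # 100 Hz IMU rate (assumed)
x = np.array([0.0, 0.0])                    # state: [position, velocity]
P = np.eye(2)                               # state covariance
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity transition
B = np.array([0.5 * dt**2, dt])             # acceleration input model
Q = np.eye(2) * 1e-4                        # process noise (tuning guess)
H = np.array([[1.0, 0.0]])                  # GNSS observes position only
R = np.array([[4.0]])                       # GNSS noise, ~2 m std (guess)

for k in range(500):
    accel = 0.1                             # fake IMU reading (m/s^2)
    x = F @ x + B * accel                   # predict with IMU
    P = F @ P @ F.T + Q
    if k % 100 == 0:                        # GNSS fix every 1 s (assumed)
        z = np.array([x[0] + np.random.randn() * 2.0])  # fake GNSS fix
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y                       # correct with GNSS
        P = (np.eye(2) - K @ H) @ P

print(f"fused position: {x[0]:.2f} m, velocity: {x[1]:.2f} m/s")
```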
arXiv Detail & Related papers (2025-07-25T09:06:11Z)
- HeRCULES: Heterogeneous Radar Dataset in Complex Urban Environment for Multi-session Radar SLAM [9.462058316827804]
The HeRCULES dataset is a comprehensive, multi-modal dataset with heterogeneous radars, FMCW LiDAR, IMU, GPS, and cameras.
This is the first dataset to integrate 4D radar and spinning radar alongside FMCW LiDAR, offering unparalleled localization, mapping, and place recognition capabilities.
arXiv Detail & Related papers (2025-02-04T02:41:00Z)
- UdeerLID+: Integrating LiDAR, Image, and Relative Depth with Semi-Supervised [12.440461420762265]
Road segmentation is a critical task for autonomous driving systems.
One of the primary challenges is the scarcity of large-scale, accurately labeled datasets.
Our work introduces an innovative approach that integrates LiDAR point cloud data, visual images, and relative depth maps under a semi-supervised scheme.
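As a sketch of the semi-supervised ingredient, the snippet below shows the confidence-thresholded pseudo-labeling step commonly used in semi-supervised segmentation; the shapes and threshold are assumptions for illustration, not UdeerLID+'s actual pipeline.

```python
import numpy as np

# Fake softmax maps for a batch of unlabeled images: (batch, classes, H, W).
probs = np.random.rand(4, 2, 64, 64)
probs /= probs.sum(axis=1, keepdims=True)

conf = probs.max(axis=1)                  # per-pixel confidence
pseudo = probs.argmax(axis=1)             # per-pixel pseudo-label
mask = conf > 0.9                         # trust only confident pixels (assumed threshold)
print(f"{mask.mean():.1%} of pixels get pseudo-labels; label map shape {pseudo.shape}")
```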
arXiv Detail & Related papers (2024-09-10T03:57:30Z)
- UAV (Unmanned Aerial Vehicles): Diverse Applications of UAV Datasets in Segmentation, Classification, Detection, and Tracking [0.0]
Unmanned Aerial Vehicles (UAVs) have revolutionized the process of gathering and analyzing data in diverse research domains.
UAV datasets consist of various types of data, such as satellite imagery, images captured by drones, and videos.
These datasets play a crucial role in disaster damage assessment, aerial surveillance, object recognition, and tracking.
arXiv Detail & Related papers (2024-09-05T04:47:36Z)
- MUFASA: Multi-View Fusion and Adaptation Network with Spatial Awareness for Radar Object Detection [3.1212590312985986]
The sparsity of radar point clouds poses challenges for precise object detection.
This paper introduces a comprehensive feature extraction method for radar point clouds.
We achieve state-of-the-art results among radar-based methods on the VoD dataset with an mAP of 50.24%.
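For context on the reported metric, the snippet below computes a simplified, uninterpolated average precision (the AP behind mAP) from a ranked list of detections; the scores, matches, and ground-truth count are invented for illustration.

```python
import numpy as np

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5])      # detection confidences, sorted
is_tp = np.array([1, 0, 1, 1, 0])                 # 1 if matched to a ground-truth box
num_gt = 4                                        # total ground-truth objects

tp_cum = np.cumsum(is_tp)
precision = tp_cum / np.arange(1, len(is_tp) + 1)
recall = tp_cum / num_gt

# Area under the precision-recall curve (no precision envelope, for brevity).
ap = recall[0] * precision[0]                     # first segment from recall 0
ap += np.sum((recall[1:] - recall[:-1]) * precision[1:])
print(f"AP = {ap:.3f}")                           # mAP averages AP over classes
```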
arXiv Detail & Related papers (2024-08-01T13:52:18Z)
- SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection [79.23689506129733]
We establish a new benchmark dataset and an open-source method for large-scale SAR object detection.
Our dataset, SARDet-100K, is a result of intense surveying, collecting, and standardizing 10 existing SAR detection datasets.
To the best of our knowledge, SARDet-100K is the first COCO-level large-scale multi-class SAR object detection dataset ever created.
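Since the dataset is described as COCO-level, it presumably ships COCO-style annotations; assuming so, they could be explored with the standard pycocotools API (the annotation path below is a hypothetical placeholder).

```python
from pycocotools.coco import COCO

# Hypothetical annotation file name; the real path comes with the dataset.
coco = COCO("annotations/sardet100k_train.json")

cat_ids = coco.getCatIds()                         # all object categories
img_ids = coco.getImgIds()                         # all image ids
print(len(cat_ids), "classes,", len(img_ids), "images")

# Load the annotations of the first image.
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[:1]))
print(len(anns), "annotations in the first image")
```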
arXiv Detail & Related papers (2024-03-11T09:20:40Z)
- Multi-Modal Dataset Acquisition for Photometrically Challenging Objects [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- SynDrone -- Multi-modal UAV Dataset for Urban Scenarios [11.338399194998933]
The scarcity of large-scale real datasets with pixel-level annotations poses a significant challenge to researchers.
We propose a multimodal synthetic dataset containing both images and 3D data taken at multiple flying heights.
The dataset will be made publicly available to support the development of novel computer vision methods targeting UAV applications.
arXiv Detail & Related papers (2023-08-21T06:22:10Z)
- Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset from the harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The paper focuses not only on capturing temporal and spatial data diversity but also on presenting the impact of harsh conditions on the captured data.
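Since the data is distributed in ROS format, a minimal way to peek at it is the standard rosbag Python API (ROS 1); the bag filename and topic name below are hypothetical placeholders, not the dataset's documented topics.

```python
import rosbag

# Hypothetical bag file and topic; consult the dataset documentation for
# the actual names.
with rosbag.Bag("subt_aerosol.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=["/os_cloud_node/points"]):
        # msg is the deserialized sensor message; t is the recorded ROS time.
        print(topic, t.to_sec())
        break                              # just peek at the first message
```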
arXiv Detail & Related papers (2023-04-27T20:21:18Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
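As a sketch of what such a robustness benchmark involves, the snippet below corrupts one modality (lidar) by jittering and dropping points before re-evaluation; the corruption model and magnitudes are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def corrupt_lidar(points: np.ndarray, noise_std: float = 0.1,
                  drop_ratio: float = 0.2) -> np.ndarray:
    """Jitter xyz coordinates and randomly drop a fraction of points."""
    keep = np.random.rand(len(points)) > drop_ratio
    noisy = points[keep].copy()
    noisy[:, :3] += np.random.randn(keep.sum(), 3) * noise_std
    return noisy

clean = np.random.rand(1000, 4)            # fake cloud: x, y, z, intensity
corrupted = corrupt_lidar(clean)
print(clean.shape, "->", corrupted.shape)  # fewer, noisier points for re-evaluation
```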
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Deep learning for radar data exploitation of autonomous vehicle [0.0]
This thesis focuses on automotive RADAR, a low-cost active sensor that measures properties of surrounding objects.
The RADAR sensor is seldom used for scene understanding due to its poor angular resolution and the size, noise, and complexity of its raw data.
This thesis proposes an extensive study of RADAR scene understanding, from the construction of an annotated dataset to the conception of adapted deep learning architectures.
arXiv Detail & Related papers (2022-03-15T16:19:51Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
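To illustrate the voxel-based side of such an early fusion, here is a minimal sketch that rasterizes radar points into a bird's-eye-view (BEV) occupancy grid, the kind of representation a voxel-based fusion backbone consumes. Grid extents and resolution are assumptions for illustration, not RadarNet's actual configuration.

```python
import numpy as np

def bev_occupancy(points: np.ndarray, x_range=(0.0, 80.0),
                  y_range=(-40.0, 40.0), cell=0.5) -> np.ndarray:
    """Rasterize (x, y) points into a binary BEV occupancy grid."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.float32)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[ix[valid], iy[valid]] = 1.0       # mark occupied cells
    return grid

radar = np.random.rand(200, 2) * [80, 80] + [0, -40]   # fake radar points (x, y)
print(bev_occupancy(radar).sum(), "occupied cells")
```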
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
- Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data for accurate and reliable pedestrian navigation is a key component of emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
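A minimal sketch of the preprocessing typical for learning-based inertial odometry on such a dataset: slicing the 6-axis IMU stream into fixed-length windows. The rate, window length, and stride are common choices assumed here, not necessarily OxIOD's.

```python
import numpy as np

def make_windows(imu: np.ndarray, window: int = 200, stride: int = 10) -> np.ndarray:
    """imu: (N, 6) array of [ax, ay, az, gx, gy, gz] samples; returns stacked windows."""
    starts = range(0, len(imu) - window + 1, stride)
    return np.stack([imu[s:s + window] for s in starts])

stream = np.random.randn(2000, 6)          # fake accelerometer + gyroscope stream
windows = make_windows(stream)
print(windows.shape)                        # (181, 200, 6): one training sample per window
```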
arXiv Detail & Related papers (2020-01-13T04:41:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.