The IMPTC Dataset: An Infrastructural Multi-Person Trajectory and Context Dataset
- URL: http://arxiv.org/abs/2307.06165v1
- Date: Wed, 12 Jul 2023 13:46:20 GMT
- Title: The IMPTC Dataset: An Infrastructural Multi-Person Trajectory and Context Dataset
- Authors: Manuel Hetzel, Hannes Reichert, Günther Reitberger, Erich Fuchs, Konrad Doll, Bernhard Sick
- Abstract summary: Inner-city intersections are among the most critical traffic areas for injury and fatal accidents.
We use an intelligent public inner-city intersection in Germany with visual sensor technology.
The resulting dataset consists of eight hours of measurement data.
- Score: 4.413278371057897
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Inner-city intersections are among the most critical traffic areas for injury
and fatal accidents. Automated vehicles struggle with the complex and hectic
everyday life within those areas. Sensor-equipped smart infrastructures, which
can cooperate with vehicles, can benefit automated traffic by extending the
perception capabilities of drivers and vehicle perception systems.
Additionally, they offer the opportunity to gather reproducible and precise
data for holistic scene understanding, including context information, as a
basis for training algorithms for various applications in automated traffic.
Therefore, we introduce the Infrastructural Multi-Person Trajectory and Context
Dataset (IMPTC). We use an intelligent public inner-city intersection in
Germany with visual sensor technology. A multi-view camera and LiDAR system
perceives traffic situations and road users' behavior. Additional sensors
monitor contextual information like weather, lighting, and traffic light signal
status. The data acquisition system focuses on Vulnerable Road Users (VRUs) and
multi-agent interaction. The resulting dataset consists of eight hours of
measurement data. It contains over 2,500 VRU trajectories, including
pedestrians, cyclists, e-scooter riders, strollers, and wheelchair users, and
over 20,000 vehicle trajectories recorded at different times of day, weather
conditions, and seasons. In addition, to support the entire research stack,
the dataset includes all data levels, from raw sensor, calibration, and
detection data up to trajectory and context data. The dataset is continuously
expanded and is available online for non-commercial research at
https://github.com/kav-institute/imptc-dataset.
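For orientation, the sketch below shows one way trajectory and context data of the kind described above could be loaded and combined. It is a minimal, hypothetical example: the file names, column names, and units are assumptions for illustration only and do not reflect the repository's actual schema, which is documented on the linked GitHub page.

```python
import pandas as pd

# Hypothetical trajectory table: one row per detection with columns
# track_id, timestamp (s), x, y (metres), and road-user class.
tracks = pd.read_csv("vru_trajectories.csv").sort_values("timestamp")

# Hypothetical context table: one row per traffic-light state change with
# columns timestamp (s) and signal_state.
signals = pd.read_csv("traffic_light_states.csv").sort_values("timestamp")

# Attach the most recent signal state to every trajectory point (as-of join).
merged = pd.merge_asof(tracks, signals, on="timestamp", direction="backward")

def path_length_m(track: pd.DataFrame) -> float:
    """Total travelled distance of one trajectory, assuming x/y in metres."""
    dx = track["x"].diff().fillna(0.0)
    dy = track["y"].diff().fillna(0.0)
    return float((dx.pow(2) + dy.pow(2)).pow(0.5).sum())

# Example analysis: distribution of path lengths over all tracks.
lengths = merged.groupby("track_id").apply(path_length_m)
print(lengths.describe())
```

An as-of join is used here because context signals such as traffic-light state typically change far less often than the trajectory sampling rate, so each trajectory point simply inherits the most recent known state.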
Related papers
- RoboSense: Large-scale Dataset and Benchmark for Multi-sensor Low-speed Autonomous Driving [62.5830455357187]
In this paper, we construct a multimodal data collection platform based on 3 main types of sensors (Camera, LiDAR and Fisheye).
A large-scale multi-sensor dataset is built, named RoboSense, to facilitate near-field scene understanding.
RoboSense contains more than 133K synchronized frames with 1.4M 3D bounding boxes and IDs in the full $360^\circ$ view, forming 216K trajectories across 7.6K temporal sequences.
arXiv Detail & Related papers (2024-08-28T03:17:40Z) - SKoPe3D: A Synthetic Dataset for Vehicle Keypoint Perception in 3D from Traffic Monitoring Cameras [26.457695296042903]
We propose SKoPe3D, a unique synthetic vehicle keypoint dataset from a roadside perspective.
SKoPe3D contains over 150k vehicle instances and 4.9 million keypoints.
Our experiments highlight the dataset's applicability and the potential for knowledge transfer between synthetic and real-world data.
arXiv Detail & Related papers (2023-09-04T02:57:30Z) - Smart Infrastructure: A Research Junction [5.172393727004225]
We introduce an intelligent research infrastructure equipped with visual sensor technology, located at a public inner-city junction in Aschaffenburg, Germany.
A multiple-view camera system monitors the traffic situation to perceive road users' behavior.
The system is used for research in data generation, evaluating new HAD sensor systems, algorithms, and Artificial Intelligence (AI) training strategies.
arXiv Detail & Related papers (2023-07-12T14:04:12Z) - aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception [0.0]
This dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view.
The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain.
We trained unimodal and multimodal baseline models for 3D object detection.
arXiv Detail & Related papers (2022-11-17T10:19:59Z) - IDD-3D: Indian Driving Dataset for 3D Unstructured Road Scenes [79.18349050238413]
Preparation and training of deployable deep learning architectures require the models to be suited to different traffic scenarios.
An unstructured and complex driving layout found in several developing countries such as India poses a challenge to these models.
We build a new dataset, IDD-3D, which consists of multi-modal data from multiple cameras and LiDAR sensors with 12k annotated driving LiDAR frames.
arXiv Detail & Related papers (2022-10-23T23:03:17Z) - Ithaca365: Dataset and Driving Perception under Repeated and Challenging Weather Conditions [0.0]
We present a new dataset to enable robust autonomous driving via a novel data collection process.
The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS.
We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects.
arXiv Detail & Related papers (2022-08-01T22:55:32Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicles (FLCAV) have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - An Experimental Urban Case Study with Various Data Sources and a Model for Traffic Estimation [65.28133251370055]
We organize an experimental campaign with video measurement in an area within the urban network of Zurich, Switzerland.
We focus on capturing the traffic state in terms of traffic flow and travel times, using measurements from established thermal cameras.
We propose a simple yet efficient Multiple Linear Regression (MLR) model to estimate travel times with fusion of various data sources.
arXiv Detail & Related papers (2021-08-02T08:13:57Z) - 4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous Driving [48.588254700810474]
We present a novel dataset covering seasonal and challenging perceptual conditions for autonomous driving.
Among others, it enables research on visual odometry, global place recognition, and map-based re-localization tracking.
arXiv Detail & Related papers (2020-09-14T12:31:20Z) - High-Precision Digital Traffic Recording with Multi-LiDAR Infrastructure Sensor Setups [0.0]
We investigate the impact of fused LiDAR point clouds compared to single LiDAR point clouds.
The evaluation of the extracted trajectories shows that a fused infrastructure approach significantly improves the tracking results and reaches accuracies within a few centimeters.
arXiv Detail & Related papers (2020-06-22T10:57:52Z) - LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE will provide the research community with a means for a fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.