A Large Scale Event-based Detection Dataset for Automotive
- URL: http://arxiv.org/abs/2001.08499v3
- Date: Fri, 31 Jan 2020 13:35:45 GMT
- Title: A Large Scale Event-based Detection Dataset for Automotive
- Authors: Pierre de Tournemire, Davide Nitti, Etienne Perot, Davide Migliore,
Amos Sironi
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the first very large detection dataset for event cameras. The
dataset is composed of more than 39 hours of automotive recordings acquired
with a 304x240 ATIS sensor. It covers open roads and highly diverse driving
scenarios, spanning urban, highway, suburban and countryside scenes, as well
as varied weather and illumination conditions. Manual bounding box
annotations of cars and pedestrians contained in the recordings are also
provided at a frequency between 1 and 4Hz, yielding more than 255,000 labels in
total. We believe that the availability of a labeled dataset of this size will
contribute to major advances in event-based vision tasks such as object
detection and classification. We also expect benefits in other tasks such as
optical flow, structure from motion and tracking, where for example, the large
amount of data can be leveraged by self-supervised learning methods.
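The abstract describes raw event streams from a 304x240 sensor paired with bounding-box labels at 1 to 4 Hz. A minimal sketch of how such data is typically prepared for a frame-based detector, using synthetic events; the function names, the 50 ms pairing window, and the (x, y, t, p) field layout are common event-camera conventions assumed here, not the paper's specified format:

```python
import numpy as np

# Sensor resolution stated in the paper: 304 x 240 (width x height).
WIDTH, HEIGHT = 304, 240

def events_to_frame(x, y, p, width=WIDTH, height=HEIGHT):
    """Accumulate an event slice into a 2-channel histogram
    (one channel per polarity), a common representation for
    feeding event data to frame-based object detectors."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    np.add.at(frame, (p, y, x), 1.0)  # count events per pixel/polarity
    return frame

def slice_around(t, label_t_us, window_us=50_000):
    """Index range of events within +/- window of a label timestamp.
    Labels arrive at 1-4 Hz; a ~50 ms window is an assumed, typical
    choice for pairing events with a bounding box."""
    lo = np.searchsorted(t, label_t_us - window_us, side="left")
    hi = np.searchsorted(t, label_t_us + window_us, side="right")
    return lo, hi

# Toy example with synthetic events (the real recordings ship as
# binary event files; this only illustrates the representation).
rng = np.random.default_rng(0)
n = 10_000
t = np.sort(rng.integers(0, 1_000_000, n))  # timestamps in microseconds
x = rng.integers(0, WIDTH, n)
y = rng.integers(0, HEIGHT, n)
p = rng.integers(0, 2, n)                   # polarity: 0 (OFF) or 1 (ON)

lo, hi = slice_around(t, label_t_us=500_000)
frame = events_to_frame(x[lo:hi], y[lo:hi], p[lo:hi])
print(frame.shape)  # (2, 240, 304)
```

Every event in the slice contributes exactly one count, so the histogram total equals the number of paired events; denser representations (time surfaces, voxel grids) follow the same slicing pattern.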
Related papers
- DailyDVS-200: A Comprehensive Benchmark Dataset for Event-Based Action Recognition [51.96660522869841]
DailyDVS-200 is a benchmark dataset tailored for the event-based action recognition community.
It covers 200 action categories across real-world scenarios, recorded by 47 participants, and comprises more than 22,000 event sequences.
DailyDVS-200 is annotated with 14 attributes, ensuring a detailed characterization of the recorded actions.
arXiv Detail & Related papers (2024-07-06T15:25:10Z)
- SKoPe3D: A Synthetic Dataset for Vehicle Keypoint Perception in 3D from Traffic Monitoring Cameras [26.457695296042903]
We propose SKoPe3D, a unique synthetic vehicle keypoint dataset from a roadside perspective.
SKoPe3D contains over 150k vehicle instances and 4.9 million keypoints.
Our experiments highlight the dataset's applicability and the potential for knowledge transfer between synthetic and real-world data.
arXiv Detail & Related papers (2023-09-04T02:57:30Z)
- The Interstate-24 3D Dataset: a new benchmark for 3D multi-camera vehicle tracking [4.799822253865053]
This work presents a novel video dataset recorded from overlapping highway traffic cameras along an urban interstate, enabling multi-camera 3D object tracking in a traffic monitoring context.
Data is released from 3 scenes containing video from at least 16 cameras each, totaling 57 minutes in length.
877,000 3D bounding boxes and corresponding object tracklets are fully and accurately annotated for each camera field of view and are combined into a spatially and temporally continuous set of vehicle trajectories for each scene.
arXiv Detail & Related papers (2023-08-28T18:43:33Z)
- DeepAccident: A Motion and Accident Prediction Benchmark for V2X Autonomous Driving [76.29141888408265]
We propose a large-scale dataset containing diverse accident scenarios that frequently occur in real-world driving.
The proposed DeepAccident dataset includes 57K annotated frames and 285K annotated samples, approximately 7 times more than the large-scale nuScenes dataset.
arXiv Detail & Related papers (2023-04-03T17:37:00Z)
- Traffic Scene Parsing through the TSP6K Dataset [109.69836680564616]
We introduce a specialized traffic monitoring dataset, termed TSP6K, with high-quality pixel-level and instance-level annotations.
The dataset captures more crowded traffic scenes, with several times more traffic participants than existing driving-scene datasets.
We propose a detail refining decoder for scene parsing, which recovers the details of different semantic regions in traffic scenes.
arXiv Detail & Related papers (2023-03-06T02:05:14Z)
- Recurrent Vision Transformers for Object Detection with Event Cameras [62.27246562304705]
We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras.
RVTs can be trained from scratch to reach state-of-the-art performance on event-based object detection.
Our study brings new insights into effective design choices that can be fruitful for research beyond event-based vision.
arXiv Detail & Related papers (2022-12-11T20:28:59Z)
- TrafficCAM: A Versatile Dataset for Traffic Flow Segmentation [9.744937939618161]
Existing traffic flow datasets have two major limitations: they feature a limited number of classes, usually restricted to a single type of vehicle, and they suffer from a scarcity of unlabelled data.
We introduce a new benchmark traffic flow image dataset called TrafficCAM.
arXiv Detail & Related papers (2022-11-17T16:14:38Z)
- Synthehicle: Multi-Vehicle Multi-Camera Tracking in Virtual Cities [4.4855664250147465]
We present a massive synthetic dataset for multiple vehicle tracking and segmentation in multiple overlapping and non-overlapping camera views.
The dataset consists of 17 hours of labeled video material, recorded from 340 cameras in 64 diverse day, rain, dawn, and night scenes.
arXiv Detail & Related papers (2022-08-30T11:36:07Z)
- TAO: A Large-Scale Benchmark for Tracking Any Object [95.87310116010185]
The Tracking Any Object (TAO) dataset consists of 2,907 high-resolution videos, captured in diverse environments, which are half a minute long on average.
We ask annotators to label objects that move at any point in the video, and give names to them post factum.
Our vocabulary is both significantly larger and qualitatively different from existing tracking datasets.
arXiv Detail & Related papers (2020-05-20T21:07:28Z)
- Training-free Monocular 3D Event Detection System for Traffic Surveillance [93.65240041833319]
Existing event detection systems are mostly learning-based and have achieved convincing performance when a large amount of training data is available.
In real-world scenarios, collecting sufficient labeled training data is expensive and sometimes impossible.
We propose a training-free monocular 3D event detection system for traffic surveillance.
arXiv Detail & Related papers (2020-02-01T04:42:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.