The Pedestrian Patterns Dataset
- URL: http://arxiv.org/abs/2001.01816v1
- Date: Mon, 6 Jan 2020 23:58:39 GMT
- Title: The Pedestrian Patterns Dataset
- Authors: Kasra Mokhtari and Alan R. Wagner
- Abstract summary: The dataset was collected by repeatedly traversing the same three routes over the course of one week, with traversals starting at different fixed timeslots.
The purpose of the dataset is to capture the patterns of social and pedestrian behavior along the traversed routes at different times.
- Score: 11.193504036335503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the pedestrian patterns dataset for autonomous driving. The dataset was collected by repeatedly traversing the same three routes over the course of one week, with traversals starting at different fixed timeslots. The purpose of the dataset is to capture the patterns of social and pedestrian behavior along the traversed routes at different times, and eventually to use this information to predict the risk associated with autonomously traveling along different routes. The dataset contains Full HD videos and GPS data for each traversal. The Fast R-CNN pedestrian detection method is applied to the captured videos to count the number of pedestrians in each video frame and thereby assess pedestrian density along a route. By providing this large-scale dataset to researchers, we hope to accelerate autonomous driving research, not only on estimating the risk to both the public and the autonomous vehicle, but also on long-term vision-based localization of mobile robots and the autonomous vehicles of the future.
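The per-frame counting step described above maps directly to a short script. The sketch below is illustrative, not the authors' code: it substitutes torchvision's pre-trained Faster R-CNN (the widely available successor to Fast R-CNN) for the paper's detector, and the video file name and score threshold are hypothetical.

```python
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

PERSON_CLASS = 1        # "person" in this model's COCO label map
SCORE_THRESHOLD = 0.7   # arbitrary confidence cutoff

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def count_pedestrians(frame_bgr):
    """Count detected persons in a single BGR video frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]
    keep = (detections["labels"] == PERSON_CLASS) & (detections["scores"] > SCORE_THRESHOLD)
    return int(keep.sum())

counts = []  # per-frame counts, a proxy for pedestrian density on the route
cap = cv2.VideoCapture("traversal_route1.mp4")  # hypothetical file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    counts.append(count_pedestrians(frame))
cap.release()
print(f"mean pedestrians per frame: {sum(counts) / max(len(counts), 1):.2f}")
```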
Related papers
- Pedestrian Environment Model for Automated Driving [54.16257759472116]
We propose an environment model that includes the position of the pedestrians as well as their pose information.
We extract skeletal information from the image with a neural-network human pose estimator.
To obtain the 3D information of the position, we aggregate the data from consecutive frames in conjunction with the vehicle position.
arXiv Detail & Related papers (2023-08-17T16:10:58Z)
- Navigating Uncertainty: The Role of Short-Term Trajectory Prediction in Autonomous Vehicle Safety [3.3659635625913564]
We have developed a dataset for short-term trajectory prediction tasks using the CARLA simulator.
The dataset is extensive and incorporates what are considered complex scenarios, such as pedestrians crossing the road and vehicles overtaking.
An end-to-end short-term trajectory prediction model using convolutional neural networks (CNN) and long short-term memory (LSTM) networks has also been developed; a minimal sketch of such an architecture follows this entry.
arXiv Detail & Related papers (2023-07-11T14:28:33Z)
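As a rough illustration of the CNN+LSTM design mentioned above, here is a minimal sketch; the input format (a short history of image crops), layer sizes, and prediction horizon are all assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CNNLSTMPredictor(nn.Module):
    def __init__(self, hidden_size=128, future_steps=10):
        super().__init__()
        # Small CNN encoder applied independently to each frame crop.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size, batch_first=True)
        # Decode the final hidden state into future (x, y) positions.
        self.head = nn.Linear(hidden_size, future_steps * 2)
        self.future_steps = future_steps

    def forward(self, frames):  # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1]).view(b, self.future_steps, 2)

# Toy usage: 8 past frames of 64x64 crops -> 10 future positions.
model = CNNLSTMPredictor()
pred = model(torch.randn(2, 8, 3, 64, 64))
print(pred.shape)  # torch.Size([2, 10, 2])
```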
- aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception [0.0]
This dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view.
The data was captured in highway, urban, and suburban areas during daytime, at night, and in rain.
We trained unimodal and multimodal baseline models for 3D object detection.
arXiv Detail & Related papers (2022-11-17T10:19:59Z)
- Ithaca365: Dataset and Driving Perception under Repeated and Challenging Weather Conditions [0.0]
We present a new dataset to enable robust autonomous driving via a novel data collection process.
The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS.
We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects.
arXiv Detail & Related papers (2022-08-01T22:55:32Z)
- Cross-Camera Trajectories Help Person Retrieval in a Camera Network [124.65912458467643]
Existing methods often rely on purely visual matching or consider temporal constraints but ignore the spatial information of the camera network.
We propose a pedestrian retrieval framework based on cross-camera generation, which integrates both temporal and spatial information.
To verify the effectiveness of our method, we construct the first cross-camera pedestrian trajectory dataset.
arXiv Detail & Related papers (2022-04-27T13:10:48Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- ROAD: The ROad event Awareness Dataset for Autonomous Driving [16.24547478826027]
ROAD is designed to test an autonomous vehicle's ability to detect road events.
It comprises 22 videos, annotated with bounding boxes showing the location in the image plane of each road event.
We also provide, as a baseline, a new incremental algorithm for online road event awareness based on applying RetinaNet along the time axis.
arXiv Detail & Related papers (2021-02-23T09:48:56Z)
- Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as more stable MTL training; a toy two-head MTL sketch follows this entry.
arXiv Detail & Related papers (2020-12-04T15:10:12Z)
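To illustrate the multi-task idea above, here is a minimal two-head sketch: one shared backbone, a detection head, and a 32-way attribute head. The backbone, head shapes, and dummy targets are illustrative assumptions; the paper's composite-field formulation is not reproduced here.

```python
import torch
import torch.nn as nn

class PedestrianMTL(nn.Module):
    def __init__(self, num_attributes=32):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.detection_head = nn.Linear(64, 4)               # box (x, y, w, h)
        self.attribute_head = nn.Linear(64, num_attributes)  # 32 attribute logits

    def forward(self, images):
        feats = self.backbone(images)  # shared features feed both heads
        return self.detection_head(feats), self.attribute_head(feats)

model = PedestrianMTL()
boxes, attrs = model(torch.randn(2, 3, 128, 128))
# MTL training sums per-task losses over the shared features
# (dummy zero targets here, for illustration only):
box_loss = nn.functional.l1_loss(boxes, torch.zeros_like(boxes))
attr_loss = nn.functional.binary_cross_entropy_with_logits(attrs, torch.zeros_like(attrs))
total_loss = box_loss + attr_loss
print(boxes.shape, attrs.shape)  # torch.Size([2, 4]) torch.Size([2, 32])
```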
- Universal Embeddings for Spatio-Temporal Tagging of Self-Driving Logs [72.67604044776662]
We tackle the problem of spatio-temporal tagging of self-driving scenes from raw sensor data.
Our approach learns a universal embedding for all tags, enabling efficient tagging of many attributes and faster learning of new attributes with limited data; a toy sketch of embedding-based tagging follows this entry.
arXiv Detail & Related papers (2020-11-12T02:18:16Z)
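A toy sketch of the shared-embedding idea above: scene features and tag embeddings live in one space, a tag applies when the similarity clears a threshold, and adding a new tag only requires a new embedding row. The dimensions, tag names, and zero threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

DIM = 64
TAGS = ["rain", "intersection", "construction"]  # hypothetical tag set

scene_encoder = nn.Linear(256, DIM)        # stand-in for a sensor encoder
tag_embeddings = nn.Embedding(len(TAGS), DIM)

scene = scene_encoder(torch.randn(1, 256))        # (1, DIM) scene embedding
scores = scene @ tag_embeddings.weight.T          # similarity to every tag
active = [t for t, s in zip(TAGS, scores[0]) if s > 0]
print(active)  # tags whose similarity clears the (arbitrary) threshold
```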
- Traffic Control Gesture Recognition for Autonomous Vehicles [4.336324036790157]
We introduce a dataset based on 3D body skeleton input for performing traffic control gesture classification at every time step; a minimal per-time-step classifier is sketched after this entry.
Our dataset consists of 250 sequences from several actors, ranging from 16 to 90 seconds per sequence.
arXiv Detail & Related papers (2020-07-31T13:40:41Z)
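As a rough illustration of per-time-step gesture classification over 3D skeleton sequences as described above, here is a minimal sketch; the joint count, number of gesture classes, and plain LSTM tagger are assumptions, not the dataset's reference model.

```python
import torch
import torch.nn as nn

NUM_JOINTS, NUM_CLASSES = 17, 5  # hypothetical skeleton and label set

class GestureTagger(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(NUM_JOINTS * 3, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, NUM_CLASSES)

    def forward(self, skeletons):  # (batch, time, joints, 3) xyz coordinates
        b, t = skeletons.shape[:2]
        out, _ = self.lstm(skeletons.view(b, t, -1))
        return self.head(out)  # one class score vector per time step

model = GestureTagger()
logits = model(torch.randn(4, 120, NUM_JOINTS, 3))  # 120-frame sequence
print(logits.shape)  # torch.Size([4, 120, 5])
```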
- Spatiotemporal Relationship Reasoning for Pedestrian Intent Prediction [57.56466850377598]
Reasoning over visual data is a desirable capability for robotics and vision-based applications.
In this paper, we present a graph-based framework to uncover relationships among different objects in the scene for reasoning about pedestrian intent; a toy sketch of this style of reasoning follows this entry.
Pedestrian intent, defined as the future action of crossing or not crossing the street, is a crucial piece of information for autonomous vehicles.
arXiv Detail & Related papers (2020-02-20T18:50:44Z)
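To make the graph-reasoning idea above concrete, here is a toy sketch: scene objects become nodes, one round of message passing mixes their features, and the pedestrian node is classified as crossing or not crossing. The feature size and single-layer design are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class IntentGraphNet(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.message = nn.Linear(feat_dim, feat_dim)
        self.update = nn.Linear(2 * feat_dim, feat_dim)
        self.classify = nn.Linear(feat_dim, 2)  # crossing / not crossing

    def forward(self, node_feats, adjacency, pedestrian_idx):
        # node_feats: (nodes, feat_dim); adjacency: (nodes, nodes) 0/1 matrix
        msgs = adjacency @ self.message(node_feats)  # aggregate neighbor messages
        nodes = torch.relu(self.update(torch.cat([node_feats, msgs], dim=-1)))
        return self.classify(nodes[pedestrian_idx])  # intent logits

# Toy scene: 5 objects, pedestrian is node 0, fully connected graph.
net = IntentGraphNet()
adj = torch.ones(5, 5) - torch.eye(5)
print(net(torch.randn(5, 32), adj, pedestrian_idx=0))
```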
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.