Deep Virtual-to-Real Distillation for Pedestrian Crossing Prediction
- URL: http://arxiv.org/abs/2211.00856v1
- Date: Wed, 2 Nov 2022 03:53:55 GMT
- Title: Deep Virtual-to-Real Distillation for Pedestrian Crossing Prediction
- Authors: Jie Bai, Xin Fang, Jianwu Fang, Jianru Xue, and Changwei Yuan
- Abstract summary: We formulate a deep virtual-to-real distillation framework by introducing synthetic data that can be generated conveniently.
We transfer the abundant information on pedestrian movement in synthetic videos to pedestrian crossing prediction on real data with a simple and lightweight implementation.
State-of-the-art performance of this framework is demonstrated by exhaustive experimental analysis.
- Score: 18.17737928566106
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pedestrian crossing is one of the most typical behaviors that conflict with the natural driving behavior of vehicles. Consequently, pedestrian crossing prediction is one of the primary tasks that influence vehicle planning for safe driving. However, current methods that rely on data collected in real driving scenes cannot depict and cover all kinds of scene conditions in the real traffic world. To this end, we formulate a deep virtual-to-real distillation framework by introducing synthetic data that can be generated conveniently, and transfer the abundant information on pedestrian movement in synthetic videos to pedestrian crossing prediction on real data with a simple and lightweight implementation. To verify this framework, we construct a benchmark of 4,667 virtual videos containing about 745k frames (called Virtual-PedCross-4667), and evaluate the proposed method on two challenging datasets collected in real driving situations, i.e., the JAAD and PIE datasets. State-of-the-art performance of this framework is demonstrated by exhaustive experimental analysis. The dataset and code can be downloaded from http://www.lotvs.net/code_data/.
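As a rough illustration of the virtual-to-real distillation idea described in the abstract, the sketch below shows how a crossing-prediction teacher pretrained on synthetic videos could supervise a student trained on real tracks. The PyTorch model, feature dimensions, loss weighting, and dummy data are illustrative assumptions, not the authors' released implementation.

```python
# Minimal, hypothetical sketch of virtual-to-real distillation for pedestrian
# crossing prediction (PyTorch). Architecture, dimensions, and loss weights are
# illustrative assumptions, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossingPredictor(nn.Module):
    """Encodes an observed pedestrian track and predicts a crossing logit."""

    def __init__(self, in_dim=4, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # logit for "will cross"

    def forward(self, tracks):  # tracks: (B, T, in_dim) bounding-box sequences
        _, h = self.encoder(tracks)
        return self.head(h[-1]).squeeze(-1)  # (B,) logits


def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, temp=2.0):
    """Hard-label BCE on real data plus soft-label distillation from a
    teacher pretrained on synthetic (virtual) videos."""
    hard = F.binary_cross_entropy_with_logits(student_logits, labels)
    soft = F.binary_cross_entropy_with_logits(
        student_logits / temp, torch.sigmoid(teacher_logits / temp)
    )
    return (1 - alpha) * hard + alpha * soft


# Usage sketch: teacher frozen after pretraining on synthetic videos
# (e.g., Virtual-PedCross-4667); student fine-tuned on real JAAD/PIE tracks.
teacher = CrossingPredictor().eval()        # assume synthetic-pretrained weights are loaded
student = CrossingPredictor()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

real_tracks = torch.randn(8, 16, 4)         # dummy batch: 8 tracks, 16 frames, (x, y, w, h)
labels = torch.randint(0, 2, (8,)).float()  # 1 = pedestrian crosses

with torch.no_grad():
    t_logits = teacher(real_tracks)
optimizer.zero_grad()
loss = distillation_loss(student(real_tracks), t_logits, labels)
loss.backward()
optimizer.step()
```

In this sketch, the soft-target term lets the real-data student inherit motion cues learned from the conveniently generated synthetic videos, while the hard-label term keeps it anchored to the real-data annotations.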
Related papers
- DrivingSphere: Building a High-fidelity 4D World for Closed-loop Simulation [54.02069690134526]
We propose DrivingSphere, a realistic and closed-loop simulation framework.
Its core idea is to build a 4D world representation and generate realistic, controllable driving scenarios.
By providing a dynamic and realistic simulation environment, DrivingSphere enables comprehensive testing and validation of autonomous driving algorithms.
arXiv Detail & Related papers (2024-11-18T03:00:33Z)
- XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z)
- DeepAccident: A Motion and Accident Prediction Benchmark for V2X Autonomous Driving [76.29141888408265]
We propose a large-scale dataset containing diverse accident scenarios that frequently occur in real-world driving.
The proposed DeepAccident dataset includes 57K annotated frames and 285K annotated samples, approximately 7 times more than the large-scale nuScenes dataset.
arXiv Detail & Related papers (2023-04-03T17:37:00Z)
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show that data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the Waymo Open Motion Dataset show that TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- Interaction Detection Between Vehicles and Vulnerable Road Users: A Deep Generative Approach with Attention [9.442285577226606]
We propose a conditional generative model for interaction detection at intersections.
It aims to automatically analyze massive video data about the continuity of road users' behavior.
The model's efficacy was validated by testing on real-world datasets.
arXiv Detail & Related papers (2021-05-09T10:03:55Z)
- Learning to Simulate on Sparse Trajectory Data [26.718807213824853]
We present a novel framework, ImInGAIL, to address the problem of learning to simulate driving behavior from sparse real-world data.
To the best of our knowledge, we are the first to tackle the data sparsity issue for behavior learning problems.
arXiv Detail & Related papers (2021-03-22T13:42:11Z)
- Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction [88.0416857308144]
We propose an alternative to sensor simulation, as sensor simulation is expensive and has large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
- SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving [27.948417322786575]
We present a simple yet effective approach to generate realistic scenario sensor data.
Our approach uses texture-mapped surfels to efficiently reconstruct the scene from an initial vehicle pass or set of passes.
We then leverage a SurfelGAN network to reconstruct realistic camera images for novel positions and orientations of the self-driving vehicle.
arXiv Detail & Related papers (2020-05-08T04:01:14Z)