DeepCrashTest: Turning Dashcam Videos into Virtual Crash Tests for
Automated Driving Systems
- URL: http://arxiv.org/abs/2003.11766v1
- Date: Thu, 26 Mar 2020 07:03:45 GMT
- Title: DeepCrashTest: Turning Dashcam Videos into Virtual Crash Tests for
Automated Driving Systems
- Authors: Sai Krishna Bashetty, Heni Ben Amor, Georgios Fainekos
- Abstract summary: We use dashcam crash videos uploaded to the internet to extract valuable collision data.
We tackle the problem of extracting 3D vehicle trajectories from videos recorded by an unknown and uncalibrated monocular camera source.
- Score: 9.17424462858218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of this paper is to generate simulations with real-world collision
scenarios for training and testing autonomous vehicles. We use numerous dashcam
crash videos uploaded on the internet to extract valuable collision data and
recreate the crash scenarios in a simulator. We tackle the problem of
extracting 3D vehicle trajectories from videos recorded by an unknown and
uncalibrated monocular camera source using a modular approach. A working
architecture and demonstration videos along with the open-source implementation
are provided with the paper.
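The listing does not spell out the paper's modular pipeline, but the core geometric step of recovering 3D positions from a monocular camera can be illustrated. The sketch below is an assumption for illustration only (not taken from the paper): it back-projects a tracked vehicle's 2D bounding-box footpoint onto a flat road plane, given a guessed pinhole intrinsic and camera height.

```python
# Hypothetical sketch: recover a ground-plane 3D position from a tracked
# 2D bounding-box footpoint. All camera parameters below are illustrative
# placeholders, not values from the paper.

def backproject_footpoint(u, v, fx, fy, cx, cy, cam_height):
    """Intersect the viewing ray through pixel (u, v) with the road plane.

    Uses the standard pinhole convention (x right, y down, z forward),
    so the road plane sits at y = +cam_height below the camera.
    """
    # Direction of the ray through the pixel in camera coordinates.
    dx = (u - cx) / fx
    dy = (v - cy) / fy
    # The ray is (t*dx, t*dy, t); intersecting y = cam_height gives
    # t = cam_height / dy, which requires the pixel to lie below the horizon.
    if dy <= 0:
        raise ValueError("pixel is at or above the horizon")
    t = cam_height / dy
    return (t * dx, cam_height, t)  # (X, Y, Z) in metres

# A footpoint near the bottom-centre of the image maps to a point a few
# metres directly ahead of the camera.
X, Y, Z = backproject_footpoint(u=640, v=600, fx=1000, fy=1000,
                                cx=640, cy=360, cam_height=1.4)
```

Repeating this per frame over a tracked detection yields a 3D trajectory; the paper's modular approach additionally has to estimate the unknown camera parameters, which are simply assumed here.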
Related papers
- CycleCrash: A Dataset of Bicycle Collision Videos for Collision Prediction and Analysis [21.584020544141797]
CycleCrash is a novel dataset consisting of 3,000 dashcam videos with 436,347 frames that capture cyclists in a range of critical situations.
This dataset enables 9 different cyclist collision prediction and classification tasks focusing on potentially hazardous conditions for cyclists.
We propose VidNeXt, a novel method that leverages a ConvNeXt spatial encoder and a non-stationary transformer to capture the temporal dynamics of videos for the tasks defined in our dataset.
arXiv Detail & Related papers (2024-09-30T04:46:35Z) - Deep Reinforcement Learning for Adverse Garage Scenario Generation [5.482809279542029]
This thesis proposes an automated program generation framework for autonomous driving simulation testing.
Based on deep reinforcement learning, this framework can generate varied 2D ground script code, from which 3D model files and map model files are built.
The generated 3D ground scenes are displayed in the Carla simulator, where experimenters can use this scene for navigation algorithm simulation testing.
arXiv Detail & Related papers (2024-07-01T14:41:18Z) - MagicDrive3D: Controllable 3D Generation for Any-View Rendering in Street Scenes [72.02827211293736]
We introduce MagicDrive3D, a novel pipeline for controllable 3D street scene generation.
Unlike previous methods that reconstruct before training the generative models, MagicDrive3D first trains a video generation model and then reconstructs from the generated data.
Our results demonstrate the framework's superior performance, showcasing its transformative potential for autonomous driving simulation and beyond.
arXiv Detail & Related papers (2024-05-23T12:04:51Z) - Learning 3D Particle-based Simulators from RGB-D Videos [15.683877597215494]
We propose a method for learning simulators directly from observations.
Visual Particle Dynamics (VPD) jointly learns a latent particle-based representation of 3D scenes.
Unlike existing 2D video prediction models, VPD's 3D structure enables scene editing and long-term predictions.
arXiv Detail & Related papers (2023-12-08T20:45:34Z) - Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving
Without Real Data [56.49494318285391]
We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
This allows us to train an end-to-end RL policy in simulation and deploy it directly in the real world.
arXiv Detail & Related papers (2022-10-25T17:50:36Z) - 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D
Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share the open-source CrashD: a synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z) - Monocular 3D Vehicle Detection Using Uncalibrated Traffic Cameras
through Homography [12.062095895630563]
This paper proposes a method to extract the position and pose of vehicles in the 3D world from a single traffic camera.
We observe that the homography between the road plane and the image plane is essential to 3D vehicle detection.
We propose a new regression target called tailed r-box and a dual-view network architecture which boosts the detection accuracy on warped BEV images.
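The road-plane homography mentioned above maps image pixels to ground-plane coordinates via a single 3x3 matrix. As an illustration only, the sketch below applies a made-up homography (the matrix values and the `apply_homography` helper are invented for this example, not taken from the paper):

```python
# Toy illustration of applying an image-to-road homography. The matrix
# values below are fabricated for this sketch.

def apply_homography(H, u, v):
    """Map an image pixel (u, v) to road-plane coordinates (x, y) by
    multiplying by H in homogeneous coordinates and dehomogenizing."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

# A hypothetical image->road homography: pure scaling plus translation.
H = [[0.01, 0.00, -5.0],
     [0.00, 0.02,  0.0],
     [0.00, 0.00,  1.0]]

x, y = apply_homography(H, 700, 400)  # -> (2.0, 8.0) road-plane metres
```

In practice such a matrix is estimated from point correspondences (e.g. lane markings of known width) rather than written down by hand.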
arXiv Detail & Related papers (2021-03-29T02:57:37Z) - Testing the Safety of Self-driving Vehicles by Simulating Perception and
Prediction [88.0416857308144]
We propose an alternative to sensor simulation, which is expensive and suffers from large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z) - Edge Computing for Real-Time Near-Crash Detection for Smart
Transportation Applications [29.550609157368466]
Traffic near-crash events serve as critical data sources for various smart transportation applications.
This paper leverages edge computing to address these challenges by processing video streams from existing onboard dashcams in real time.
It is among the first efforts in applying edge computing for real-time traffic video analytics and is expected to benefit multiple sub-fields in smart transportation research and applications.
arXiv Detail & Related papers (2020-08-02T19:39:14Z) - LiDARsim: Realistic LiDAR Simulation by Leveraging the Real World [84.57894492587053]
We develop a novel simulator that captures both the power of physics-based and learning-based simulation.
We first utilize ray casting over the 3D scene and then use a deep neural network to produce deviations from the physics-based simulation.
We showcase LiDARsim's usefulness for perception algorithm testing on long-tail events and end-to-end closed-loop evaluation on safety-critical scenarios.
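The hybrid idea in the LiDARsim summary, a physics-based ray cast corrected by a learned deviation, can be sketched in a toy form. Everything below is illustrative: the "scene" is a flat wall and the "network" is a stand-in constant, neither taken from the paper.

```python
import math

# Conceptual sketch of hybrid LiDAR simulation: a physics-based range
# plus a learned per-ray deviation. All components are stand-ins.

def physics_raycast(ray_angle_deg, wall_distance=10.0):
    """Idealized range to a flat wall perpendicular to the 0-degree ray."""
    return wall_distance / math.cos(math.radians(ray_angle_deg))

def learned_deviation(ray_angle_deg):
    """Stand-in for a neural network that predicts sensor-specific
    residuals (dropout, bias, etc.); here a fixed small bias in metres."""
    return -0.05

def simulate_range(ray_angle_deg):
    """Final simulated range = physics prediction + learned correction."""
    return physics_raycast(ray_angle_deg) + learned_deviation(ray_angle_deg)

r = simulate_range(0.0)  # 10.0 m physics range minus a 0.05 m learned bias
```

The design point is the split of responsibilities: geometry that is cheap to model exactly stays in the physics term, while sensor quirks that resist analytic modelling are left to the learned term.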
arXiv Detail & Related papers (2020-06-16T17:44:35Z) - SimAug: Learning Robust Representations from Simulation for Trajectory
Prediction [78.91518036949918]
We propose a novel approach to learn robust representation through augmenting the simulation training data.
We show that SimAug achieves promising results on three real-world benchmarks using zero real training data.
arXiv Detail & Related papers (2020-04-04T21:22:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.