Adv3D: Generating Safety-Critical 3D Objects through Closed-Loop
Simulation
- URL: http://arxiv.org/abs/2311.01446v1
- Date: Thu, 2 Nov 2023 17:56:44 GMT
- Title: Adv3D: Generating Safety-Critical 3D Objects through Closed-Loop
Simulation
- Authors: Jay Sarva, Jingkang Wang, James Tu, Yuwen Xiong, Sivabalan
Manivasagam, Raquel Urtasun
- Abstract summary: Self-driving vehicles (SDVs) must be rigorously tested on a wide range of scenarios to ensure safe deployment.
The industry typically relies on closed-loop simulation to evaluate how the SDV interacts with a corpus of synthetic and real scenarios.
It is key to evaluate the full autonomy system in closed-loop, and to understand how variations in sensor data based on scene appearance, such as the shape of actors, affect system performance.
- Score: 41.24263342397263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-driving vehicles (SDVs) must be rigorously tested on a wide range of
scenarios to ensure safe deployment. The industry typically relies on
closed-loop simulation to evaluate how the SDV interacts with a corpus of
synthetic and real scenarios and to verify that it performs properly. However,
these evaluations primarily exercise only the system's motion planning module,
and consider only behavior variations. It is key to evaluate the full autonomy system in
closed-loop, and to understand how variations in sensor data based on scene
appearance, such as the shape of actors, affect system performance. In this
paper, we propose a framework, Adv3D, that takes real world scenarios and
performs closed-loop sensor simulation to evaluate autonomy performance, and
finds vehicle shapes that make the scenario more challenging, resulting in
autonomy failures and uncomfortable SDV maneuvers. Unlike prior works that add
contrived adversarial shapes to vehicle rooftops or roadsides to harm
perception only, we optimize a low-dimensional shape representation to modify
the vehicle shape itself in a realistic manner to degrade autonomy performance
(e.g., perception, prediction, and motion planning). Moreover, we find that the
shape variations found with Adv3D optimized in closed-loop are much more
effective than those in open-loop, demonstrating the importance of finding
scene appearance variations that affect autonomy in the interactive setting.
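The abstract describes optimizing a low-dimensional shape representation inside a closed-loop simulator so that the resulting vehicle shape degrades autonomy performance. A minimal sketch of that idea is a black-box search over a bounded shape latent, scoring each candidate by re-running the (here, stubbed) closed-loop simulation. The `autonomy_score` function and all parameters are hypothetical stand-ins, not the paper's actual objective or optimizer:

```python
import random

def autonomy_score(shape_latent):
    # Hypothetical stand-in for closed-loop simulation: in Adv3D this would
    # re-render sensor data with the modified vehicle shape, run the full
    # autonomy stack, and score perception/prediction/planning performance.
    # Here, a smooth toy function whose minimum plays the "worst" shape.
    return sum((z - 0.7) ** 2 for z in shape_latent)

def find_adversarial_shape(dim=4, iters=200, sigma=0.1, seed=0):
    """Black-box random search over a low-dimensional shape latent, with the
    latent clipped to a bounded range so the vehicle stays plausible."""
    rng = random.Random(seed)
    best = [0.0] * dim                 # start from the nominal vehicle shape
    best_score = autonomy_score(best)
    for _ in range(iters):
        cand = [max(-1.0, min(1.0, z + rng.gauss(0, sigma))) for z in best]
        score = autonomy_score(cand)
        if score < best_score:         # lower score = worse autonomy behavior
            best, best_score = cand, score
    return best, best_score
```

Because each candidate is evaluated by running the whole loop, the search accounts for how perception errors propagate into prediction and planning, which is the closed-loop advantage the abstract emphasizes.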
Related papers
- RESCUE: Crowd Evacuation Simulation via Controlling SDM-United Characters [48.356346584588906]
Current evacuation models overlook the complex human behaviors that occur during evacuation. We propose a real-time 3D crowd evacuation simulation framework that integrates a 3D-adaptive SFM (Social Force Model) Decision Mechanism and a Personalized Gait Control Motor.
arXiv Detail & Related papers (2025-07-27T03:50:18Z)
- GeoDrive: 3D Geometry-Informed Driving World Model with Precise Action Control [50.67481583744243]
We introduce GeoDrive, which explicitly integrates robust 3D geometry conditions into driving world models. We propose a dynamic editing module during training to enhance the renderings by editing the positions of the vehicles. Our method significantly outperforms existing models in both action accuracy and 3D spatial awareness.
arXiv Detail & Related papers (2025-05-28T14:46:51Z)
- RAD: Retrieval-Augmented Decision-Making of Meta-Actions with Vision-Language Models in Autonomous Driving [10.984203470464687]
Vision-language models (VLMs) often suffer from limitations such as inadequate spatial perception and hallucination.
We propose a retrieval-augmented decision-making (RAD) framework to enhance VLMs' capabilities to reliably generate meta-actions in autonomous driving scenes.
We fine-tune VLMs on a dataset derived from the NuScenes dataset to enhance their spatial perception and bird's-eye view image comprehension capabilities.
arXiv Detail & Related papers (2025-03-18T03:25:57Z)
- DrivingSphere: Building a High-fidelity 4D World for Closed-loop Simulation [54.02069690134526]
We propose DrivingSphere, a realistic and closed-loop simulation framework.
Its core idea is to build 4D world representation and generate real-life and controllable driving scenarios.
By providing a dynamic and realistic simulation environment, DrivingSphere enables comprehensive testing and validation of autonomous driving algorithms.
arXiv Detail & Related papers (2024-11-18T03:00:33Z)
- DiFSD: Ego-Centric Fully Sparse Paradigm with Uncertainty Denoising and Iterative Refinement for Efficient End-to-End Self-Driving [55.53171248839489]
We propose an ego-centric fully sparse paradigm, named DiFSD, for end-to-end self-driving.
Specifically, DiFSD mainly consists of sparse perception, hierarchical interaction and iterative motion planner.
Experiments conducted on nuScenes and Bench2Drive datasets demonstrate the superior planning performance and great efficiency of DiFSD.
arXiv Detail & Related papers (2024-09-15T15:55:24Z)
- CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation [44.83732884335725]
Sensor simulation involves modeling traffic participants, such as vehicles, with high quality appearance and articulated geometry.
Current reconstruction approaches struggle on in-the-wild sensor data, due to its sparsity and noise.
We present CADSim, which combines part-aware object-class priors via a small set of CAD models with differentiable rendering to automatically reconstruct vehicle geometry.
arXiv Detail & Related papers (2023-11-02T17:56:59Z)
- UniSim: A Neural Closed-Loop Sensor Simulator [76.79818601389992]
We present UniSim, a neural sensor simulator that takes a single recorded log captured by a sensor-equipped vehicle and converts it into a realistic closed-loop multi-sensor simulation.
UniSim builds neural feature grids to reconstruct both the static background and dynamic actors in the scene.
We incorporate learnable priors for dynamic objects, and leverage a convolutional network to complete unseen regions.
arXiv Detail & Related papers (2023-08-03T17:56:06Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
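TransFuser's key idea is integrating image and LiDAR representations with attention. A minimal sketch, assuming token-level features from each modality and using plain scaled dot-product cross-attention with a residual connection (the shapes and feature dimensions here are illustrative, not the paper's architecture):

```python
import numpy as np

def cross_attention(q_feats, kv_feats):
    # Scaled dot-product attention: query tokens attend to key/value tokens.
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ kv_feats

rng = np.random.default_rng(0)
img_tokens = rng.normal(size=(8, 16))     # e.g. 8 image patch features, dim 16
lidar_tokens = rng.normal(size=(12, 16))  # e.g. 12 LiDAR pillar features, dim 16

# Image tokens gather complementary geometric context from LiDAR tokens;
# the residual add keeps the original appearance features intact.
fused = img_tokens + cross_attention(img_tokens, lidar_tokens)
```

The attention weights let each image token pool from whichever LiDAR features are most relevant, rather than fusing by fixed geometric projection.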
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Recovering and Simulating Pedestrians in the Wild [81.38135735146015]
We propose to recover the shape and motion of pedestrians from sensor readings captured in the wild by a self-driving car driving around.
We incorporate the reconstructed pedestrian assets bank in a realistic 3D simulation system.
We show that the simulated LiDAR data can be used to significantly reduce the amount of real-world data required for visual perception tasks.
arXiv Detail & Related papers (2020-11-16T17:16:32Z)
- Modeling Perception Errors towards Robust Decision Making in Autonomous Vehicles [11.503090828741191]
We propose a simulation-based methodology towards answering the question: is a perception subsystem sufficient for the decision making subsystem to make robust, safe decisions?
We show how to analyze the impact of different kinds of sensing and perception errors on the behavior of the autonomous system.
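The question this entry poses, whether a perception subsystem is sufficient for robust downstream decisions, can be probed with a simple Monte Carlo sketch: inject noise into a perceived quantity and measure how often the decision flips. The decision rule, noise model, and thresholds below are toy assumptions, not the paper's methodology:

```python
import random

def brake_decision(obstacle_dist, threshold=10.0):
    # Toy decision rule: brake if the perceived obstacle is within threshold.
    return obstacle_dist < threshold

def flip_rate(true_dist, noise_sigma, trials=1000, seed=0):
    """Fraction of trials where Gaussian perception error flips the
    decision relative to the ground-truth decision."""
    rng = random.Random(seed)
    correct = brake_decision(true_dist)
    flips = sum(
        brake_decision(true_dist + rng.gauss(0, noise_sigma)) != correct
        for _ in range(trials)
    )
    return flips / trials
```

Sweeping `noise_sigma` over the error levels a candidate perception stack actually exhibits gives an estimate of how much perception error the decision layer can tolerate before its behavior becomes unreliable.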
arXiv Detail & Related papers (2020-01-31T08:02:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.