Semantically Adversarial Scenario Generation with Explicit Knowledge Guidance
- URL: http://arxiv.org/abs/2106.04066v6
- Date: Thu, 20 Jul 2023 00:24:58 GMT
- Title: Semantically Adversarial Scenario Generation with Explicit Knowledge Guidance
- Authors: Wenhao Ding, Haohong Lin, Bo Li, Ding Zhao
- Abstract summary: We introduce a method to incorporate domain knowledge explicitly in the generation process to achieve Semantically Adversarial Generation (SAG).
By imposing semantic rules on the properties of nodes and edges in the tree structure, explicit knowledge integration enables controllable generation.
Our method efficiently identifies adversarial driving scenes against different state-of-the-art 3D point cloud segmentation models.
- Score: 24.09547181095033
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating adversarial scenarios, which have the potential to fail autonomous
driving systems, provides an effective way to improve robustness. Extending
purely data-driven generative models, recent specialized models satisfy
additional controllability requirements, such as embedding a traffic sign in a
driving scene, by manipulating patterns implicitly at the neuron level. In this
paper, we introduce a method to incorporate domain knowledge explicitly into the
generation process to achieve Semantically Adversarial Generation (SAG). To
be consistent with the composition of driving scenes, we first categorize the
knowledge into two types: the properties of objects and the relationships among
objects. We then propose a tree-structured variational auto-encoder (T-VAE) to
learn a hierarchical scene representation. By imposing semantic rules on the
properties of nodes and edges in the tree structure, explicit knowledge
integration enables controllable generation. We construct a synthetic example
to illustrate the controllability and explainability of our method in a
succinct setting. We further extend the method to realistic environments for
autonomous vehicles: it efficiently identifies adversarial driving scenes
against different state-of-the-art 3D point cloud segmentation models while
satisfying the traffic rules specified as explicit knowledge.
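The abstract's core idea, a tree-structured scene representation on which semantic rules constrain node properties and parent-child edges, can be illustrated with a minimal sketch. This is not the paper's T-VAE; the node kinds, the heading property, and the two rules below are hypothetical stand-ins for the "explicit knowledge" the paper describes.

```python
# Minimal sketch (hypothetical, not the paper's T-VAE): a tree-structured
# driving scene where semantic rules constrain node properties and edges.
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    kind: str                               # e.g. "road", "lane", "vehicle"
    props: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

def violates_rules(node: SceneNode) -> list:
    """Collect violations of two illustrative semantic rules:
    (1) a vehicle node must be the child of a lane node (an edge rule);
    (2) a vehicle's heading must stay within 30 degrees of its lane's
        heading (a property rule)."""
    violations = []
    for child in node.children:
        if child.kind == "vehicle" and node.kind != "lane":
            violations.append(f"vehicle under {node.kind}, expected lane")
        if child.kind == "vehicle" and node.kind == "lane":
            dev = abs(child.props.get("heading", 0.0) - node.props.get("heading", 0.0))
            if dev > 30:
                violations.append("vehicle heading deviates >30 deg from lane")
        violations.extend(violates_rules(child))
    return violations

# Usage: one compliant vehicle on a lane, one vehicle misplaced on the road.
scene = SceneNode("road", {"heading": 0.0}, [
    SceneNode("lane", {"heading": 0.0},
              [SceneNode("vehicle", {"heading": 5.0})]),   # compliant
    SceneNode("vehicle", {"heading": 90.0}),               # violates rule (1)
])
print(violates_rules(scene))  # → ['vehicle under road, expected lane']
```

In the paper's framing, a generator constrained this way can search for scenes that are adversarial to a perception model while still satisfying rules like these, rather than producing physically or semantically implausible scenes.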
Related papers
- On Learning Informative Trajectory Embeddings for Imitation, Classification and Regression [19.01804572722833]
In real-world sequential decision making tasks, learning from observed state-action trajectories is critical for tasks like imitation, classification, and clustering.
We propose a novel method for embedding state-action trajectories into a latent space that captures the skills and competencies in the dynamic underlying decision-making processes.
arXiv Detail & Related papers (2025-01-16T06:52:58Z)
- Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models [114.69732301904419]
We present an approach to apply end-to-end open-set (any environment/scene) autonomous driving that is capable of providing driving decisions from representations queryable by image and text.
Our approach demonstrates unparalleled results in diverse tests while achieving significantly greater robustness in out-of-distribution situations.
arXiv Detail & Related papers (2023-10-26T17:56:35Z)
- Graph-based Topology Reasoning for Driving Scenes [102.35885039110057]
We present TopoNet, the first end-to-end framework capable of abstracting traffic knowledge beyond conventional perception tasks.
We evaluate TopoNet on the challenging scene understanding benchmark, OpenLane-V2.
arXiv Detail & Related papers (2023-04-11T15:23:29Z)
- Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding [51.8579160500354]
We propose an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration.
Results show performance equivalent or even superior to fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2023-03-17T00:28:33Z)
- Towards Explainable Motion Prediction using Heterogeneous Graph Representations [3.675875935838632]
Motion prediction systems aim to capture the future behavior of traffic scenarios enabling autonomous vehicles to perform safe and efficient planning.
GNN-based approaches have recently gained attention as they are well suited to naturally model these interactions.
In this work, we aim to improve the explainability of motion prediction systems by using different approaches.
arXiv Detail & Related papers (2022-12-07T17:43:42Z)
- Guided Conditional Diffusion for Controllable Traffic Simulation [42.198185904248994]
Controllable and realistic traffic simulation is critical for developing and verifying autonomous vehicles.
Data-driven approaches generate realistic and human-like behaviors, improving transfer from simulated to real-world traffic.
We develop a conditional diffusion model for controllable traffic generation (CTG) that allows users to control desired properties of trajectories at test time.
arXiv Detail & Related papers (2022-10-31T14:44:59Z)
- Transferable and Adaptable Driving Behavior Prediction [34.606012573285554]
We propose HATN, a hierarchical framework to generate high-quality, transferable, and adaptable predictions for driving behaviors.
We demonstrate our algorithms in the task of trajectory prediction for real traffic data at intersections and roundabouts from the INTERACTION dataset.
arXiv Detail & Related papers (2022-02-10T16:46:24Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free deep reinforcement learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN [59.57221522897815]
We propose a neural network model based on trajectories information for driving behavior recognition.
We evaluate the proposed model on the public BLVD dataset, achieving a satisfying performance.
arXiv Detail & Related papers (2021-03-01T06:47:29Z)
- BoMuDANet: Unsupervised Adaptation for Visual Scene Understanding in Unstructured Driving Environments [54.22535063244038]
We present an unsupervised adaptation approach for visual scene understanding in unstructured traffic environments.
Our method is designed for unstructured real-world scenarios with dense and heterogeneous traffic consisting of cars, trucks, two-and three-wheelers, and pedestrians.
arXiv Detail & Related papers (2020-09-22T08:25:44Z)
- PathGAN: Local Path Planning with Attentive Generative Adversarial Networks [0.0]
We present a model capable of generating plausible paths from egocentric images for autonomous vehicles.
Our generative model comprises two neural networks: the feature extraction network (FEN) and the path generation network (PGN).
We also introduce ETRIDriving, a dataset for autonomous driving in which the recorded sensor data are labeled with discrete high-level driving actions.
arXiv Detail & Related papers (2020-07-08T03:31:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.