SEAL: Towards Safe Autonomous Driving via Skill-Enabled Adversary Learning for Closed-Loop Scenario Generation
- URL: http://arxiv.org/abs/2409.10320v1
- Date: Mon, 16 Sep 2024 14:33:21 GMT
- Title: SEAL: Towards Safe Autonomous Driving via Skill-Enabled Adversary Learning for Closed-Loop Scenario Generation
- Authors: Benjamin Stoler, Ingrid Navarro, Jonathan Francis, Jean Oh
- Abstract summary: We propose SEAL, a scenario perturbation approach that leverages learned scoring functions and adversarial, human-like skills.
SEAL-perturbed scenarios are more realistic than SOTA baselines, leading to improved ego task success across real-world, in-distribution, and out-of-distribution scenarios.
- Score: 13.008446845017454
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Verification and validation of autonomous driving (AD) systems and components is of increasing importance, as such technology increases in real-world prevalence. Safety-critical scenario generation is a key approach to robustify AD policies through closed-loop training. However, existing approaches for scenario generation rely on simplistic objectives, resulting in overly aggressive or non-reactive adversarial behaviors. To generate diverse adversarial yet realistic scenarios, we propose SEAL, a scenario perturbation approach that leverages learned scoring functions and adversarial, human-like skills. SEAL-perturbed scenarios are more realistic than SOTA baselines, leading to improvements in ego task success of more than 20% across real-world, in-distribution, and out-of-distribution scenarios. To facilitate future research, we release our code and tools: https://github.com/cmubig/SEAL
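The abstract describes SEAL's mechanism only at a high level. The toy Python sketch below illustrates one plausible reading of it: an adversarial agent chooses among a small library of human-like skills using a learned scoring function that trades off criticality against realism. The skill library, the closed-form score, and all names here (`SKILLS`, `learned_score`, `perturb_scenario`) are illustrative assumptions, not SEAL's actual implementation; see the linked repository for that.

```python
import numpy as np

# Hypothetical skill library: each skill maps the current adversary trajectory
# to a perturbed candidate (e.g., cut-in, hard brake, nudge).
SKILLS = {
    "cut_in":     lambda state, rng: state + rng.normal(0.5, 0.1, state.shape),
    "hard_brake": lambda state, rng: state - rng.normal(0.8, 0.1, state.shape),
    "nudge":      lambda state, rng: state + rng.normal(0.1, 0.05, state.shape),
}

def learned_score(trajectory, ego_plan):
    """Stand-in for a learned scoring function: trades off how threatening a
    candidate is against how human-like (smooth) it remains."""
    criticality = -np.linalg.norm(trajectory - ego_plan)    # closer = more adversarial
    realism = -np.abs(np.diff(trajectory, axis=0)).mean()   # smoother = more human-like
    return criticality + realism

def perturb_scenario(state, ego_plan, rng):
    """Pick the skill whose candidate rollout the scoring function rates highest."""
    candidates = {name: skill(state, rng) for name, skill in SKILLS.items()}
    return max(candidates.items(), key=lambda kv: learned_score(kv[1], ego_plan))

rng = np.random.default_rng(0)
state = np.zeros((10, 2))           # toy 10-step (x, y) adversary trajectory
ego_plan = np.ones((10, 2)) * 0.3   # toy ego plan
name, traj = perturb_scenario(state, ego_plan, rng)
print("selected skill:", name)
```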
Related papers
- Multi-UAV Pursuit-Evasion with Online Planning in Unknown Environments by Deep Reinforcement Learning [16.761470423715338]
Multi-UAV pursuit-evasion poses a key challenge for UAV swarm intelligence.
We introduce an evader prediction-enhanced network to tackle partial observability in cooperative strategy learning.
We derive a feasible policy via a two-stage reward refinement and deploy the policy on real quadrotors in a zero-shot manner.
arXiv Detail & Related papers (2024-09-24T08:40:04Z)
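The Multi-UAV entry above mentions an evader prediction-enhanced network for handling partial observability. The PyTorch sketch below shows one generic way such a module could feed a pursuer policy; the architecture, dimensions, and class names are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class EvaderPredictor(nn.Module):
    """From each pursuer's partial observation history, predict the evader's
    current state (a hypothetical prediction-enhanced module)."""
    def __init__(self, obs_dim=12, hidden=64, evader_dim=4):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, evader_dim)

    def forward(self, obs_history):            # (batch, time, obs_dim)
        _, h = self.encoder(obs_history)
        return self.head(h[-1])                # predicted evader state

class PursuerPolicy(nn.Module):
    """Policy conditioned on the raw observation plus the predicted evader
    state, mitigating partial observability in cooperative pursuit."""
    def __init__(self, obs_dim=12, evader_dim=4, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + evader_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh())

    def forward(self, obs, evader_pred):
        return self.net(torch.cat([obs, evader_pred], dim=-1))

predictor, policy = EvaderPredictor(), PursuerPolicy()
obs_hist = torch.randn(8, 20, 12)              # 8 pursuers, 20-step history
action = policy(obs_hist[:, -1], predictor(obs_hist))
print(action.shape)                            # torch.Size([8, 4])
```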
- ReGentS: Real-World Safety-Critical Driving Scenario Generation Made Stable [88.08120417169971]
Machine learning-based autonomous driving systems often face challenges with safety-critical scenarios that are rare in real-world data.
This work explores generating safety-critical driving scenarios by modifying complex real-world regular scenarios through trajectory optimization.
Our approach addresses unrealistic diverging trajectories and unavoidable collision scenarios that are not useful for training a robust planner.
arXiv Detail & Related papers (2024-09-12T08:26:33Z)
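The ReGentS entry describes perturbing regular scenarios via trajectory optimization while avoiding unrealistic divergence and unavoidable collisions. Below is a minimal numpy sketch of that recipe under assumed costs: a criticality term pulling the adversary toward the ego, a smoothness regularizer against jerky, diverging motion, and a minimum-gap projection so the scenario stays avoidable. Weights and cost forms are illustrative, not ReGentS itself.

```python
import numpy as np

def perturb_by_trajectory_optimization(adv_traj, ego_traj, steps=200, lr=0.05,
                                       w_real=1.0, min_gap=0.5):
    traj = adv_traj.copy()
    for _ in range(steps):
        # Criticality: gradient step that shrinks the distance to the ego plan.
        gap = traj - ego_traj
        grad_crit = gap / (np.linalg.norm(gap, axis=1, keepdims=True) + 1e-8)
        # Realism: Laplacian smoothing discourages unrealistic diverging motion.
        accel = np.zeros_like(traj)
        accel[1:-1] = traj[:-2] - 2 * traj[1:-1] + traj[2:]
        traj -= lr * (grad_crit - w_real * accel)
        # Keep a minimum gap so the collision never becomes unavoidable.
        dist = np.linalg.norm(traj - ego_traj, axis=1, keepdims=True)
        traj = np.where(dist < min_gap,
                        ego_traj + (traj - ego_traj) * (min_gap / (dist + 1e-8)),
                        traj)
    return traj

ego = np.stack([np.linspace(0, 10, 20), np.zeros(20)], axis=1)
adv = ego + np.array([0.0, 3.0])               # adversary starts 3 m to the side
out = perturb_by_trajectory_optimization(adv, ego)
print(np.linalg.norm(out - ego, axis=1).min())  # closest approach after perturbation
```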
- FREA: Feasibility-Guided Generation of Safety-Critical Scenarios with Reasonable Adversariality [13.240598841087841]
We introduce FREA, a novel safety-critical scenario generation method that incorporates the Largest Feasible Region (LFR) of the AV as guidance.
Experiments illustrate that FREA can effectively generate safety-critical scenarios, yielding a considerable number of near-miss events.
arXiv Detail & Related papers (2024-06-05T06:26:15Z)
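FREA's key idea, per the summary, is to keep adversarial pressure inside the AV's Largest Feasible Region so the scenario stays solvable. The toy sketch below makes strong assumptions: feasibility is approximated by a closed-form braking check rather than FREA's learned LFR, and the reward shape is invented for illustration.

```python
import numpy as np

def feasibility_value(av_state, adv_state, max_brake=8.0):
    """Positive while the AV can still brake to avoid the adversary, negative
    once a collision is kinematically unavoidable (toy stand-in for an LFR)."""
    gap = adv_state["pos"] - av_state["pos"]
    closing = max(av_state["vel"] - adv_state["vel"], 1e-6)
    stopping_dist = closing ** 2 / (2 * max_brake)
    return gap - stopping_dist

def adversary_reward(av_state, adv_state):
    """Reward criticality (small gap) only inside the feasible region, so the
    generated scenario is adversarial yet still solvable -- a near-miss."""
    if feasibility_value(av_state, adv_state) <= 0.0:
        return -1.0                     # infeasible: discourage unavoidable crash
    gap = adv_state["pos"] - av_state["pos"]
    return float(np.exp(-0.5 * gap))    # closer (but feasible) is better

av = {"pos": 0.0, "vel": 12.0}
for adv_pos in (30.0, 12.0, 5.0):
    adv = {"pos": adv_pos, "vel": 2.0}
    print(adv_pos, round(adversary_reward(av, adv), 3))
```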
- SELFI: Autonomous Self-Improvement with Reinforcement Learning for Social Navigation [54.97931304488993]
Self-improving robots that interact and improve with experience are key to the real-world deployment of robotic systems.
We propose an online learning method, SELFI, that leverages online robot experience to rapidly fine-tune pre-trained control policies.
We report improvements in terms of collision avoidance, as well as more socially compliant behavior, measured by a human user study.
arXiv Detail & Related papers (2024-03-01T21:27:03Z)
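The SELFI summary describes fine-tuning a pre-trained control policy from online robot experience. Below is a minimal actor-critic fine-tuning loop in that spirit; the network sizes, toy reward, and update scheme are assumptions, not SELFI's actual objective.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))   # "pre-trained"
critic = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))  # 8 obs + 2 act
opt_pi = torch.optim.Adam(policy.parameters(), lr=3e-4)
opt_q = torch.optim.Adam(critic.parameters(), lr=3e-4)

def collect_transition():
    """Placeholder for one step of real on-robot experience."""
    obs = torch.randn(8)
    act = policy(obs).detach()
    reward = -act.norm()          # e.g., penalize abrupt, non-compliant motion
    return obs, act, reward, torch.randn(8)

for step in range(100):
    obs, act, reward, next_obs = collect_transition()
    # Critic: one-step TD target computed from fresh online experience.
    with torch.no_grad():
        target = reward + 0.99 * critic(torch.cat([next_obs, policy(next_obs)]))
    td_loss = (critic(torch.cat([obs, act])) - target).pow(2).mean()
    opt_q.zero_grad(); td_loss.backward(); opt_q.step()
    # Actor: fine-tune the pre-trained policy against the online critic.
    actor_loss = -critic(torch.cat([obs, policy(obs)])).mean()
    opt_pi.zero_grad(); actor_loss.backward(); opt_pi.step()
```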
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
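The SAFE-SIM entry describes diffusion-controllable adversaries. One standard way to make a trajectory diffusion model controllable is test-time guidance: nudging each denoising step along the gradient of a cost. The sketch below shows that generic mechanism with a stand-in denoiser; it is not SAFE-SIM's specific formulation.

```python
import torch

def adversarial_cost(traj, ego_traj):
    """Illustrative guidance cost: encourage the sampled adversary trajectory
    to approach the ego. A controllability knob could reweight this term."""
    return torch.norm(traj - ego_traj, dim=-1).min()

def guided_denoise_step(traj_t, ego_traj, denoiser, sigma, guidance_scale=0.5):
    """One reverse-diffusion step with classifier-style guidance."""
    traj_t = traj_t.detach().requires_grad_(True)
    grad = torch.autograd.grad(adversarial_cost(traj_t, ego_traj), traj_t)[0]
    with torch.no_grad():
        denoised = denoiser(traj_t, sigma)           # model's noise-free estimate
        return denoised - guidance_scale * sigma * grad

# Toy denoiser standing in for a trained trajectory-diffusion model.
denoiser = lambda x, sigma: 0.9 * x
ego = torch.zeros(20, 2)
sample = torch.randn(20, 2) * 3.0
for sigma in (1.0, 0.5, 0.1):                        # shortened noise schedule
    sample = guided_denoise_step(sample, ego, denoiser, sigma)
print(sample.norm(dim=-1).min())                     # guided sample drifts toward ego
```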
- Learning Vision-based Pursuit-Evasion Robot Policies [54.52536214251999]
We develop a fully-observable robot policy that generates supervision for a partially-observable one.
We deploy our policy on a physical quadruped robot with an RGB-D camera for pursuit-evasion interactions in the wild.
arXiv Detail & Related papers (2023-08-30T17:59:05Z)
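The pursuit-evasion entry describes a fully-observable policy supervising a partially-observable one, i.e., privileged teacher-student distillation. Below is a minimal sketch of that generic pattern, with invented dimensions and random data standing in for real rollouts.

```python
import torch
import torch.nn as nn

# The privileged teacher sees the full state; the deployable student sees only
# a partial observation and imitates the teacher's actions.
FULL, PARTIAL, ACT = 16, 6, 3
teacher = nn.Sequential(nn.Linear(FULL, 64), nn.ReLU(), nn.Linear(64, ACT))
student = nn.Sequential(nn.Linear(PARTIAL, 64), nn.ReLU(), nn.Linear(64, ACT))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(200):
    full_state = torch.randn(32, FULL)
    partial_obs = full_state[:, :PARTIAL]       # student's restricted view
    with torch.no_grad():
        target_act = teacher(full_state)        # privileged supervision
    loss = nn.functional.mse_loss(student(partial_obs), target_act)
    opt.zero_grad(); loss.backward(); opt.step()
```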
- UMBRELLA: Uncertainty-Aware Model-Based Offline Reinforcement Learning Leveraging Planning [1.1339580074756188]
Offline reinforcement learning (RL) provides a framework for learning decision-making from offline data.
Self-driving vehicles (SDVs) can learn a policy that potentially even outperforms the behavior demonstrated in the sub-optimal dataset.
This motivates the use of model-based offline RL approaches, which leverage planning.
arXiv Detail & Related papers (2021-11-22T10:37:52Z)
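The UMBRELLA entry combines uncertainty-aware model-based offline RL with planning. A common recipe for this is an ensemble of learned dynamics models whose disagreement penalizes planned rollouts; the random-shooting planner below sketches that recipe with toy linear models, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of "learned" dynamics models (toy linear maps standing in for
# networks trained on offline driving logs).
ensemble = [np.eye(2) + 0.05 * rng.standard_normal((2, 2)) for _ in range(5)]

def rollout_return(state, actions, penalty=2.0):
    total = 0.0
    for a in actions:
        preds = [M @ state + a for M in ensemble]
        state = np.mean(preds, axis=0)
        uncertainty = np.std(preds, axis=0).sum()    # ensemble disagreement
        reward = -np.linalg.norm(state - np.array([5.0, 0.0]))
        total += reward - penalty * uncertainty      # stay where the model is sure
    return total

def plan(state, horizon=5, samples=256):
    """Random-shooting planner over the uncertainty-penalized model return."""
    best, best_ret = None, -np.inf
    for _ in range(samples):
        actions = rng.uniform(-1, 1, size=(horizon, 2))
        ret = rollout_return(state.copy(), actions)
        if ret > best_ret:
            best, best_ret = actions, ret
    return best[0]                                   # MPC-style: execute first action

print(plan(np.zeros(2)))
```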
- Safe Active Dynamics Learning and Control: A Sequential Exploration-Exploitation Framework [30.58186749790728]
We propose a theoretically-justified approach to maintaining safety in the presence of dynamics uncertainty.
Our framework guarantees the high-probability satisfaction of all constraints at all times jointly.
This theoretical analysis also motivates two regularizers of last-layer meta-learning models that improve online adaptation capabilities.
arXiv Detail & Related papers (2020-08-26T17:39:58Z)
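The entry above mentions regularizing last-layer meta-learning models for fast online adaptation. A standard mechanism behind such models is to freeze meta-learned features and adapt only the linear last layer online, e.g., by ridge regression, as sketched below; the feature map and data here are invented for illustration, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def features(x):
    """Frozen nonlinear features, standing in for a meta-learned network."""
    return np.concatenate([x, np.sin(x), np.cos(x)])

# Only the linear output layer is updated from streaming data, which keeps
# online adaptation fast and amenable to uncertainty quantification.
dim = features(np.zeros(2)).size
A = np.eye(dim)                   # regularized Gram matrix (identity prior)
b = np.zeros((dim, 2))

true_W = rng.standard_normal((dim, 2)) * 0.3
for _ in range(500):              # streaming (state, dynamics-residual) pairs
    x = rng.uniform(-1, 1, 2)
    phi = features(x)
    y = phi @ true_W + 0.01 * rng.standard_normal(2)
    A += np.outer(phi, phi)
    b += np.outer(phi, y)
W_hat = np.linalg.solve(A, b)     # ridge solution for the last layer
print(np.linalg.norm(W_hat - true_W))   # estimation error shrinks with data
```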
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
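CARL's intuition, per the summary, is that experience across diverse source environments lets an agent estimate risk before acting in a safety-critical target. The sketch below encodes one generic version: score candidate actions by mean predicted progress minus a pessimistic, worst-case catastrophe penalty over per-environment models. The models and scoring rule are illustrative assumptions, not CARL's algorithm.

```python
import numpy as np

# One toy dynamics model per diverse source environment; worst-case outcomes
# across them proxy for risk in the unseen target environment.
source_models = [lambda s, a, k=k: s + a * (0.5 + 0.2 * k) for k in range(4)]

def catastrophe(state, cliff=2.0):
    return abs(state) > cliff                  # e.g., leaving the road

def cautious_action(state, candidates, risk_weight=10.0):
    best, best_score = None, -np.inf
    for a in candidates:
        preds = [m(state, a) for m in source_models]
        progress = np.mean(preds)                          # expected gain
        risk = max(float(catastrophe(p)) for p in preds)   # worst case over envs
        score = progress - risk_weight * risk
        if score > best_score:
            best, best_score = a, score
    return best

print(cautious_action(1.5, candidates=np.linspace(-1, 1, 9)))
```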
- Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
Reinforcement learning approaches can operate directly from raw sensory inputs with only a reward signal to describe the task, but are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z)
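The GUAPO entry argues for combining a model-based controller (accurate but dependent on good state estimates) with a learned policy (flexible but brittle). Below is a minimal sketch of one such combination: switch on perception uncertainty, using the model-based controller while the estimate is confident and the learned policy otherwise. The threshold and both controllers are toy stand-ins, not the paper's implementation.

```python
import numpy as np

def model_based_controller(goal_estimate, state):
    return np.clip(goal_estimate - state, -1, 1)     # simple proportional servo

def learned_policy(obs):
    return np.tanh(0.5 * obs)                        # stand-in for an RL policy

def hybrid_action(state, goal_estimate, goal_std, threshold=0.2):
    if goal_std < threshold:                         # perception is confident
        return model_based_controller(goal_estimate, state)
    return learned_policy(state)                     # fall back to learned skill

for goal_std in (0.05, 0.5):
    print(goal_std, hybrid_action(np.array([0.3]), np.array([1.0]), goal_std))
```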