Scalable Learning of Safety Guarantees for Autonomous Systems using
Hamilton-Jacobi Reachability
- URL: http://arxiv.org/abs/2101.05916v1
- Date: Fri, 15 Jan 2021 00:13:01 GMT
- Title: Scalable Learning of Safety Guarantees for Autonomous Systems using
Hamilton-Jacobi Reachability
- Authors: Sylvia Herbert, Jason J. Choi, Suvansh Qazi, Marsalis Gibson, Koushil
Sreenath, Claire J. Tomlin
- Abstract summary: Methods like Hamilton-Jacobi reachability can provide guaranteed safe sets and controllers for such systems.
As the system operates, it may acquire new knowledge about these uncertainties and should therefore update its safety analysis accordingly.
In this paper we synthesize several techniques to speed up computation: decomposition, warm-starting, and adaptive grids.
- Score: 18.464688553299663
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous systems like aircraft and assistive robots often operate in
scenarios where guaranteeing safety is critical. Methods like Hamilton-Jacobi
reachability can provide guaranteed safe sets and controllers for such systems.
However, these same scenarios often involve unknown or uncertain environments,
system dynamics, or predictions of other agents. As the system operates, it may
acquire new knowledge about these uncertainties and should update its safety
analysis accordingly. To date, however, learning and updating safety analyses
has been limited to systems of roughly two dimensions due to the computational
complexity of the analysis. In this paper we synthesize several techniques to
speed up computation: decomposition, warm-starting, and adaptive grids. Using
this new framework we can update safe sets one or more orders of magnitude
faster than prior work, making the technique practical for many
realistic systems. We demonstrate our results on simulated 2D and 10D
near-hover quadcopters operating in a windy environment.
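To make the approach concrete, the following is a minimal sketch of warm-started, discrete-time HJ safety value iteration on a fixed 2D grid. It is illustrative only, not the authors' implementation: the single-integrator dynamics, grid resolution, wind model, and tolerances are all assumptions, and it omits the paper's decomposition and adaptive-grid machinery.

```python
# Minimal sketch, assuming a 2D single integrator with bounded control and
# an additive wind term; NOT the authors' released code.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

xs = np.linspace(-2.0, 2.0, 81)
ys = np.linspace(-2.0, 2.0, 81)
X, Y = np.meshgrid(xs, ys, indexing="ij")
DT, U_MAX = 0.05, 1.0

# l(x) > 0 outside the failure set (here, a unit disk to avoid).
l = np.hypot(X, Y) - 1.0

def safety_value(V_init, wind, tol=1e-4, max_iters=500):
    """Fixed-point iteration for V(x) = min(l(x), max_u V(x + (u + wind) dt))."""
    V = V_init.copy()
    controls = [(ux, uy) for ux in (-U_MAX, 0.0, U_MAX)
                         for uy in (-U_MAX, 0.0, U_MAX)]
    for _ in range(max_iters):
        interp = RegularGridInterpolator((xs, ys), V,
                                         bounds_error=False, fill_value=None)
        best = np.full_like(V, -np.inf)
        for ux, uy in controls:  # best-case control under the current wind estimate
            nxt = np.stack([X + (ux + wind[0]) * DT,
                            Y + (uy + wind[1]) * DT], axis=-1)
            best = np.maximum(best, interp(nxt))
        V_new = np.minimum(l, best)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V_new

V_old = safety_value(l, wind=(0.2, 0.0))      # cold start from the failure margin l(x)
V_new = safety_value(V_old, wind=(0.4, 0.1))  # warm start after the wind estimate changes
safe_set = V_new >= 0.0                       # zero superlevel set approximates the safe set
```

Cold-starting initializes the value function from the failure margin l(x); warm-starting reuses the previously converged value function when the wind estimate changes and typically converges in far fewer sweeps. The conservativeness guarantees the paper proves for warm-starting do not automatically carry over to this toy backup.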
Related papers
- In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models [97.82118821263825]
Text-to-image (T2I) models have shown remarkable progress, but their potential to generate harmful content remains a critical concern in the ML community.
We propose ICER, a novel red-teaming framework that generates interpretable and semantically meaningful problematic prompts.
Our work provides crucial insights for developing more robust safety mechanisms in T2I systems.
arXiv Detail & Related papers (2024-11-25T04:17:24Z)
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this area from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
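For reference, below is a minimal sketch of the standard CBF quadratic-program safety filter that such controllers build on, assuming known control-affine dynamics and a single constraint; this is generic background code, not the model-uncertainty-aware reformulation the paper proposes.

```python
# Minimal sketch of a generic CBF-QP safety filter (standard formulation,
# illustrative assumptions throughout).
import numpy as np

def cbf_safety_filter(x, u_des, f, g, h, grad_h, alpha=1.0):
    """Project u_des onto {u : dh/dt + alpha * h(x) >= 0} for x' = f(x) + g(x) u.

    With a single affine constraint a^T u >= b, the QP
        min ||u - u_des||^2  s.t.  a^T u >= b
    has the closed-form halfspace projection below (assumes a != 0,
    i.e. the constraint is actuated).
    """
    a = grad_h(x) @ g(x)                  # constraint normal
    b = -alpha * h(x) - grad_h(x) @ f(x)  # constraint offset
    if a @ u_des - b >= 0.0:              # nominal control already satisfies the CBF condition
        return u_des
    return u_des + (b - a @ u_des) / (a @ a) * a  # minimal correction

# Toy usage: 2D single integrator x' = u, keep outside a unit disk (h >= 0 safe).
h = lambda x: x @ x - 1.0
grad_h = lambda x: 2.0 * x
f = lambda x: np.zeros(2)
g = lambda x: np.eye(2)
u_safe = cbf_safety_filter(np.array([1.2, 0.0]), np.array([-1.0, 0.0]), f, g, h, grad_h)
```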
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Guided Safe Shooting: model based reinforcement learning with safety constraints [3.8490154494129327]
We introduce Guided Safe Shooting (GuSS), a model-based RL approach that can learn to control systems with minimal violations of the safety constraints.
We propose three different safe planners, one based on a simple random shooting strategy and two based on MAP-Elites, a more advanced divergent-search algorithm.
arXiv Detail & Related papers (2022-06-20T12:46:35Z)
- An Empirical Analysis of the Use of Real-Time Reachability for the Safety Assurance of Autonomous Vehicles [7.1169864450668845]
We propose using a real-time reachability algorithm for the implementation of the simplex architecture to assure the safety of a 1/10 scale open source autonomous vehicle platform.
In our approach, the need to analyze an underlying controller is abstracted away, instead focusing on the effects of the controller's decisions on the system's future states.
arXiv Detail & Related papers (2022-05-03T11:12:29Z)
- Safety-aware Policy Optimisation for Autonomous Racing [17.10371721305536]
We introduce Hamilton-Jacobi (HJ) reachability theory into the constrained Markov decision process (CMDP) framework.
We demonstrate that the HJ safety value can be learned directly from visual context.
We evaluate our method on several benchmark tasks, including Safety Gym and Learn-to-Race (L2R), a recently-released high-fidelity autonomous racing environment.
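For context, learned HJ safety values of this kind are commonly defined through a discounted safety Bellman backup, V(s) = (1 - gamma) * l(s) + gamma * min(l(s), max_a V(s')). Below is a minimal tabular sketch of that backup on a synthetic MDP; the states, transitions, and margins are fabricated for illustration, whereas the paper learns this value from images with function approximation.

```python
# Minimal tabular sketch of the discounted safety Bellman backup
# (illustrative toy MDP; not the paper's vision-based learner).
import numpy as np

n_states, n_actions, gamma = 50, 3, 0.99
rng = np.random.default_rng(0)
l = rng.uniform(-1.0, 1.0, n_states)                    # safety margin l(s); > 0 means safe
nxt = rng.integers(0, n_states, (n_states, n_actions))  # toy deterministic transitions

V = l.copy()
for _ in range(1000):
    best_next = V[nxt].max(axis=1)  # best action for staying safe
    V = (1.0 - gamma) * l + gamma * np.minimum(l, best_next)

safe = V >= 0.0  # approximates the HJ safe set as gamma -> 1
```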
arXiv Detail & Related papers (2021-10-14T20:15:45Z)
- Safely Learning Dynamical Systems from Short Trajectories [12.184674552836414]
A fundamental challenge in learning to control an unknown dynamical system is to reduce model uncertainty by making measurements while maintaining safety.
We formulate a mathematical definition of what it means to safely learn a dynamical system by sequentially deciding where to initialize the next trajectory.
We present a linear programming-based algorithm that either safely recovers the true dynamics from trajectories of length one, or certifies that safe learning is impossible.
arXiv Detail & Related papers (2020-11-24T18:06:10Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.