Evaluation of Runtime Monitoring for UAV Emergency Landing
- URL: http://arxiv.org/abs/2202.03059v1
- Date: Mon, 7 Feb 2022 10:51:23 GMT
- Title: Evaluation of Runtime Monitoring for UAV Emergency Landing
- Authors: Joris Guerin, Kevin Delmas, Jérémie Guiochet
- Abstract summary: Emergency Landing (EL) aims at reducing ground risk by finding safe landing areas using on-board sensors.
The proposed EL pipeline includes mechanisms to monitor learning-based components during execution.
A new evaluation methodology is introduced, and applied to assess the practical safety benefits of three Machine Learning Monitoring mechanisms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To certify UAV operations in populated areas, risk mitigation strategies --
such as Emergency Landing (EL) -- must be in place to account for potential
failures. EL aims at reducing ground risk by finding safe landing areas using
on-board sensors. The first contribution of this paper is to present a new EL
approach, in line with safety requirements introduced in recent research. In
particular, the proposed EL pipeline includes mechanisms to monitor learning-based
components during execution. In this way, a second contribution is to study
the behavior of Machine Learning Runtime Monitoring (MLRM) approaches within
the context of a real-world critical system. A new evaluation methodology is
introduced, and applied to assess the practical safety benefits of three MLRM
mechanisms. The proposed approach is compared to a default mitigation strategy
(open a parachute when a failure is detected), and appears to be much safer.
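To make the monitored-pipeline idea concrete, below is a minimal sketch of a single emergency-landing step in which a runtime monitor gates a learning-based landing-zone detector and falls back to the default parachute mitigation on rejection. The function names, the random placeholder model, and the confidence threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def segment_landing_zones(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the learning-based landing-zone detector: returns a
    per-pixel 'safe to land' probability map. A real system would run an
    on-board neural network here."""
    rng = np.random.default_rng(0)
    return rng.random(frame.shape[:2])

def monitor_accepts(safety_map: np.ndarray, threshold: float = 0.8) -> bool:
    """Hypothetical runtime-monitor check: accept the model output only if
    the best candidate zone is confident enough. The MLRM mechanisms
    evaluated in the paper inspect inputs or internal features instead."""
    return float(safety_map.max()) >= threshold

def emergency_landing_step(frame: np.ndarray) -> str:
    safety_map = segment_landing_zones(frame)
    if monitor_accepts(safety_map):
        row, col = np.unravel_index(np.argmax(safety_map), safety_map.shape)
        return f"navigate to candidate landing zone at pixel ({row}, {col})"
    # Default mitigation strategy the paper compares against.
    return "open parachute"

print(emergency_landing_step(np.zeros((480, 640, 3))))
```

The point of the structure is that the monitor, not the ML component, decides whether the learned output may be acted upon; everything the monitor rejects routes to a non-learned mitigation.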
Related papers
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
- KnowSafe: Combined Knowledge and Data Driven Hazard Mitigation in Artificial Pancreas Systems [3.146076597280736]
KnowSafe predicts and mitigates safety hazards resulting from safety-critical malicious attacks or accidental faults targeting a CPS controller.
We integrate domain-specific knowledge of safety constraints and context-specific mitigation actions with machine learning (ML) techniques.
KnowSafe outperforms the state-of-the-art by achieving higher accuracy in predicting system state trajectories and potential hazards.
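As a rough illustration of this knowledge-plus-data pattern, the sketch below uses a toy trajectory predictor together with hand-written glucose safety constraints to select a mitigation action. The linear extrapolation and the thresholds are placeholder assumptions, not KnowSafe's actual models.

```python
from typing import List

def predict_glucose(history: List[float], horizon: int = 6) -> List[float]:
    # Toy data-driven predictor: linear extrapolation of the last trend.
    trend = history[-1] - history[-2]
    return [history[-1] + trend * k for k in range(1, horizon + 1)]

def mitigation(history: List[float]) -> str:
    # Domain knowledge encoded as explicit safety constraints (mg/dL).
    HYPO, HYPER = 70.0, 180.0
    forecast = predict_glucose(history)
    if min(forecast) < HYPO:
        return "suspend insulin delivery"      # hazard: predicted hypoglycemia
    if max(forecast) > HYPER:
        return "issue correction-bolus alert"  # hazard: predicted hyperglycemia
    return "no action"

print(mitigation([120.0, 112.0, 103.0, 95.0]))  # falling trend -> suspend
```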
arXiv Detail & Related papers (2023-11-13T16:43:34Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
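The notion can be rendered in a toy one-dimensional braking model: search for the smallest deviation from the nominal braking action that produces a collision. The simulator, the nominal action, and the grid search below are illustrative assumptions, not the paper's framework.

```python
import numpy as np

def collides(brake_decel: float, v0: float = 20.0, gap: float = 40.0) -> bool:
    # Constant-deceleration stop: collision if stopping distance exceeds the gap.
    return (v0 ** 2) / (2.0 * brake_decel) > gap

def counterfactual_safety_margin(nominal_decel: float = 8.0) -> float:
    # Smallest reduction of the nominal braking action that yields a
    # collision; a larger margin means the behavior is further from danger.
    for deviation in np.arange(0.1, nominal_decel, 0.1):
        if collides(nominal_decel - deviation):
            return float(deviation)
    return float("inf")

print(f"margin = {counterfactual_safety_margin():.1f} m/s^2")
```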
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Co-Design of Out-of-Distribution Detectors for Autonomous Emergency Braking Systems [4.406331747636832]
Learning-enabled components (LECs) can make incorrect decisions when presented with samples outside of their training distributions.
Out-of-distribution (OOD) detectors have been proposed to detect such samples, thereby acting as a safety monitor.
We formulate a co-design methodology that uses this risk model to find the design parameters for an OOD detector and LEC that decrease risk below that of the baseline system.
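A minimal sketch of the monitor pattern this describes, assuming a Mahalanobis-distance OOD detector and a hand-picked threshold (the co-design methodology would instead tune such parameters against a risk model):

```python
import numpy as np

rng = np.random.default_rng(1)
train_features = rng.normal(0.0, 1.0, size=(1000, 8))  # stand-in for LEC features
mu = train_features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_features, rowvar=False))

def ood_score(x: np.ndarray) -> float:
    # Mahalanobis distance from the training distribution.
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

def braking_command(x: np.ndarray, threshold: float = 5.0) -> str:
    if ood_score(x) > threshold:
        return "maximum braking (monitor override)"  # conservative fallback
    return "LEC-computed braking"  # trust the learned component in-distribution

print(braking_command(rng.normal(0.0, 1.0, size=8)))  # in-distribution input
print(braking_command(np.full(8, 6.0)))               # far out-of-distribution
```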
arXiv Detail & Related papers (2023-07-25T11:38:40Z)
- Probabilistic Counterexample Guidance for Safer Reinforcement Learning (Extended Version) [1.279257604152629]
Safe exploration aims at addressing the limitations of Reinforcement Learning (RL) in safety-critical scenarios.
Several methods exist to incorporate external knowledge or to use sensor data to limit the exploration of unsafe states.
In this paper, we target the problem of safe exploration by guiding the training with counterexamples of the safety requirement.
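A hedged sketch of the general idea, assuming a tabular Q-learner and a simple replay scheme in which safety-violating traces are replayed with an extra penalty; the buffers, penalty value, and update rule are illustrative, not the paper's procedure:

```python
import random
from collections import defaultdict

Q = defaultdict(float)   # tabular Q-values keyed by (state, action)
replay = []              # ordinary experience
counterexamples = []     # transitions from traces violating the requirement

def store(trace, violated_safety: bool):
    (counterexamples if violated_safety else replay).extend(trace)

def q_update(batch, penalty=0.0, alpha=0.5, gamma=0.9):
    for s, a, r, s_next in batch:
        target = r - penalty + gamma * max(Q[(s_next, b)] for b in (0, 1))
        Q[(s, a)] += alpha * (target - Q[(s, a)])

# Training step: mix normal experience with penalized counterexamples.
store([(0, 1, 1.0, 1)], violated_safety=False)
store([(0, 0, 1.0, 2)], violated_safety=True)  # trace ended in an unsafe state
for _ in range(50):
    q_update(random.sample(replay, 1))
    q_update(random.sample(counterexamples, 1), penalty=10.0)
print(Q[(0, 1)], Q[(0, 0)])  # the unsafe action's value falls below the safe one's
```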
arXiv Detail & Related papers (2023-07-10T22:28:33Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this area from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
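The safety-projection half of such a combination can be sketched in closed form for a single linearized constraint; the constraint g(s, a) <= 0 and the minimal-norm correction below are assumptions for illustration, not USL's exact formulation:

```python
import numpy as np

def project_to_safe(action: np.ndarray, g_grad: np.ndarray, g_val: float) -> np.ndarray:
    """Project `action` onto {a : g_val + g_grad . (a - action) <= 0},
    a linearized state-wise safety constraint."""
    if g_val <= 0.0:  # already safe: keep the policy's action
        return action
    # Minimal-norm correction along the constraint gradient.
    return action - (g_val / float(g_grad @ g_grad)) * g_grad

proposed = np.array([1.0, 0.5])  # raw policy output
grad = np.array([2.0, 0.0])      # gradient of g w.r.t. the action
print(project_to_safe(proposed, grad, g_val=1.2))  # -> [0.4, 0.5]
```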
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
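For context, a textbook CBF safety filter for a 1-D single integrator looks like the sketch below; this is the standard construction, not the paper's model-uncertainty-aware reformulation or its event-triggered data collection scheme:

```python
# 1-D single integrator x' = u with barrier h(x) = x (keep x >= 0).
def cbf_filter(x: float, u_nominal: float, alpha: float = 1.0) -> float:
    # Enforce h'(x) * u + alpha * h(x) >= 0; here h(x) = x so h'(x) = 1,
    # giving u >= -alpha * x. The closest admissible input to u_nominal
    # is the closed-form solution of this one-constraint QP.
    u_min = -alpha * x
    return max(u_nominal, u_min)

# Driving toward the boundary: the filter caps the approach speed.
x, dt = 1.0, 0.1
for _ in range(20):
    u = cbf_filter(x, u_nominal=-2.0)  # nominal controller pushes x negative
    x += dt * u
print(f"x stays nonnegative: x = {x:.3f}")
```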
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Certifying Emergency Landing for Safe Urban UAV [0.0]
Unmanned Aerial Vehicles (UAVs) have the potential to be used for many applications in urban environments.
One of the main safety issues is the possibility for a failure to cause the loss of navigation capabilities.
Current standards, such as the SORA published in 2019, do not consider applicable mitigation techniques to handle this kind of hazardous situation.
arXiv Detail & Related papers (2021-04-30T11:47:46Z)
- Conservative Safety Critics for Exploration [120.73241848565449]
We study the problem of safe exploration in reinforcement learning (RL).
We learn a conservative safety estimate of environment states through a critic.
We show that the proposed approach can achieve competitive task performance while incurring significantly lower catastrophic failure rates.
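A toy sketch of the gating mechanism, assuming a fixed, conservatively biased critic (in the paper the critic is learned from environment interaction):

```python
import random

def safety_critic(state: float, action: float) -> float:
    # Stand-in conservative estimate of failure probability in [0, 1].
    risk = min(1.0, abs(state + action) / 10.0)
    return min(1.0, risk + 0.1)  # conservatism: overestimate the true risk

def explore(state: float, epsilon_safe: float = 0.3) -> float:
    candidate = random.uniform(-10.0, 10.0)  # exploratory action
    if safety_critic(state, candidate) <= epsilon_safe:
        return candidate
    return 0.0  # fall back to a known-safe default action

random.seed(0)
print([round(explore(state=1.0), 2) for _ in range(5)])
```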
arXiv Detail & Related papers (2020-10-27T17:54:25Z)
- Provably Safe PAC-MDP Exploration Using Analogies [87.41775218021044]
A key challenge in applying reinforcement learning to safety-critical domains is understanding how to balance exploration and safety.
We propose Analogous Safe-state Exploration (ASE), an algorithm for provably safe exploration in MDPs with unknown dynamics.
Our method exploits analogies between state-action pairs to safely learn a near-optimal policy in a PAC-MDP sense.
arXiv Detail & Related papers (2020-07-07T15:50:50Z)