Relating System Safety and Machine Learnt Model Performance
- URL: http://arxiv.org/abs/2507.20135v1
- Date: Sun, 27 Jul 2025 05:39:54 GMT
- Title: Relating System Safety and Machine Learnt Model Performance
- Authors: Ganesh Pai
- Abstract summary: This paper describes an aircraft emergency braking system containing a machine learnt component (MLC) responsible for object detection and alerting. An initial method is given to derive the minimum safety-related performance requirements, the associated metrics, and their targets for both the MLC and its underlying deep neural network. We give rationale as to why the proposed method should be considered valid, also clarifying the assumptions made, the constraints on applicability, and the implications for verification.
- Score: 0.9790236766474201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prediction quality of machine learnt models and the functionality they ultimately enable (e.g., object detection) are typically evaluated using a variety of quantitative metrics that are specified in the associated model performance requirements. When integrating such models into aeronautical applications, a top-down safety assessment process must influence both the model performance metrics selected and their acceptable range of values. Often, however, the relationship of system safety objectives to model performance requirements and the associated metrics is unclear. Using an example of an aircraft emergency braking system containing a machine learnt component (MLC) responsible for object detection and alerting, this paper first describes a simple abstraction of the required MLC behavior. Then, based on that abstraction, an initial method is given to derive the minimum safety-related performance requirements, the associated metrics, and their targets for both the MLC and its underlying deep neural network, such that they meet the quantitative safety objectives obtained from the safety assessment process. We give rationale as to why the proposed method should be considered valid, also clarifying the assumptions made, the constraints on applicability, and the implications for verification.
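As a concrete illustration of the kind of derivation the abstract describes, the sketch below walks from a quantitative safety objective to a minimum MLC recall and then to a per-frame miss-rate target for the underlying DNN. The exposure rate, frame count, and per-frame independence assumption are illustrative placeholders, not figures or assumptions taken from the paper.

```python
# Minimal sketch of a budget-style derivation from a safety objective to
# component-level performance targets. All numbers below are made up.

def min_mlc_recall(safety_objective_per_op: float, exposure_per_op: float) -> float:
    """Recall floor for the MLC.

    safety_objective_per_op: allowed probability of the hazardous outcome per operation
    exposure_per_op: expected number of must-alert events (e.g., runway obstructions) per operation
    """
    max_miss_prob = safety_objective_per_op / exposure_per_op
    return 1.0 - max_miss_prob

def max_per_frame_miss_rate(max_mlc_miss_prob: float, frames_per_event: int) -> float:
    """Per-frame DNN miss-rate target, assuming an alert is raised if any one of
    `frames_per_event` frames detects the object and (optimistically) treating
    per-frame misses as independent."""
    return max_mlc_miss_prob ** (1.0 / frames_per_event)

# Example: objective 1e-5 per operation, 1e-2 must-alert events per operation,
# 10 usable frames per event.
recall_floor = min_mlc_recall(1e-5, 1e-2)                    # -> 0.999
frame_miss = max_per_frame_miss_rate(1 - recall_floor, 10)   # -> ~0.501
print(recall_floor, frame_miss)
```

With these made-up numbers the MLC recall floor is 0.999 while the per-frame miss-rate budget is roughly 0.5, showing how component-level and frame-level targets can differ by orders of magnitude under such independence assumptions.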
Related papers
- Incorporating Failure of Machine Learning in Dynamic Probabilistic Safety Assurance [0.1398098625978622]
We introduce a safety assurance framework that integrates SafeML with Bayesian Networks (BNs) to model ML failures as part of a broader causal safety analysis.
We demonstrate the approach on a simulated automotive platooning system with traffic sign recognition.
arXiv Detail & Related papers (2025-06-07T17:16:05Z)
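A toy illustration of the kind of causal model the entry above describes, i.e. treating an ML failure as one node in a Bayesian network that feeds a hazard node: the structure, probabilities, and the choice of pgmpy here are assumptions made for illustration, not taken from that paper.

```python
from pgmpy.models import BayesianNetwork  # named DiscreteBayesianNetwork in newer pgmpy releases
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Misclassification (M) and degraded sensing (S) both raise the chance of a hazard (H).
bn = BayesianNetwork([("M", "H"), ("S", "H")])
cpd_m = TabularCPD("M", 2, [[0.95], [0.05]])     # P(misclassification)
cpd_s = TabularCPD("S", 2, [[0.90], [0.10]])     # P(degraded sensing)
cpd_h = TabularCPD("H", 2,
                   [[0.999, 0.90, 0.80, 0.40],   # P(H=0 | M, S)
                    [0.001, 0.10, 0.20, 0.60]],  # P(H=1 | M, S)
                   evidence=["M", "S"], evidence_card=[2, 2])
bn.add_cpds(cpd_m, cpd_s, cpd_h)
assert bn.check_model()

# Query the hazard probability given an observed ML failure.
print(VariableElimination(bn).query(["H"], evidence={"M": 1}))
```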
- Advancing Embodied Agent Security: From Safety Benchmarks to Input Moderation [52.83870601473094]
Embodied agents exhibit immense potential across a multitude of domains.
Existing research predominantly concentrates on the security of general large language models.
This paper introduces a novel input moderation framework, meticulously designed to safeguard embodied agents.
arXiv Detail & Related papers (2025-04-22T08:34:35Z)
- Benchmarks as Microscopes: A Call for Model Metrology [76.64402390208576]
Modern language models (LMs) pose a new challenge in capability assessment.
To be confident in our metrics, we need a new discipline of model metrology.
arXiv Detail & Related papers (2024-07-22T17:52:12Z)
- What Makes and Breaks Safety Fine-tuning? A Mechanistic Study [64.9691741899956]
Safety fine-tuning helps align Large Language Models (LLMs) with human preferences for their safe deployment.
We design a synthetic data generation framework that captures salient aspects of an unsafe input.
Using this, we investigate three well-known safety fine-tuning methods.
arXiv Detail & Related papers (2024-07-14T16:12:57Z)
- Exploring validation metrics for offline model-based optimisation with diffusion models [50.404829846182764]
In model-based optimisation (MBO) we are interested in using machine learning to design candidates that maximise some measure of reward with respect to a black box function called the (ground truth) oracle.
While an approximation to the ground-truth oracle can be trained and used in place of it during model validation to measure the mean reward over generated candidates, the evaluation is approximate and vulnerable to adversarial examples.
This is encapsulated under our proposed evaluation framework which is also designed to measure extrapolation.
arXiv Detail & Related papers (2022-11-19T16:57:37Z)
- USC: Uncompromising Spatial Constraints for Safety-Oriented 3D Object Detectors in Autonomous Driving [7.355977594790584]
We consider the safety-oriented performance of 3D object detectors in autonomous driving contexts.
We present uncompromising spatial constraints (USC), which characterize a simple yet important localization requirement.
We incorporate the quantitative measures into common loss functions to enable safety-oriented fine-tuning for existing models.
arXiv Detail & Related papers (2022-09-21T14:03:08Z)
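One simplified reading of the entry above, namely folding a safety-relevant localization measure into a training loss, is sketched below; the "near-edge distance" formulation and the weighting are hypothetical simplifications, not the paper's actual USC terms.

```python
import torch
import torch.nn.functional as F

# Penalize only errors in the safety-critical direction, here taken to be
# predicting an obstacle's nearest edge as farther from the ego vehicle than
# it really is. Illustrative reading only, not the paper's USC formulation.
def directional_safety_penalty(pred_near_m: torch.Tensor, gt_near_m: torch.Tensor) -> torch.Tensor:
    return F.relu(pred_near_m - gt_near_m).mean()

# Fold the penalty into an ordinary regression loss for fine-tuning.
pred_near = torch.tensor([10.4, 4.8, 7.6])   # predicted distance to nearest box edge (m)
gt_near   = torch.tensor([10.0, 5.0, 7.0])   # ground-truth distance (m)
loss = F.smooth_l1_loss(pred_near, gt_near) + 1.0 * directional_safety_penalty(pred_near, gt_near)
print(loss.item())
```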
- Unifying Evaluation of Machine Learning Safety Monitors [0.0]
Runtime monitors have been developed to detect prediction errors and keep the system in a safe state during operations.
This paper introduces three unified safety-oriented metrics, representing the safety benefits of the monitor (Safety Gain) and the remaining safety gaps after using it (Residual Hazard).
Three use-cases (classification, drone landing, and autonomous driving) are used to demonstrate how metrics from the literature can be expressed in terms of the proposed metrics.
arXiv Detail & Related papers (2022-08-31T07:17:42Z)
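The sketch below shows one plausible, deliberately simplified way such monitor-centric metrics could be computed from logged scenarios; the definitions used here are illustrative stand-ins, not the formal definitions proposed in that paper.

```python
# Illustrative computation of monitor-centric safety metrics from logged scenarios.
# Each record says whether the unmonitored system would have reached a hazardous
# state and whether the monitor intervened in time.
scenarios = [
    {"hazard_without_monitor": True,  "monitor_intervened": True},
    {"hazard_without_monitor": True,  "monitor_intervened": False},
    {"hazard_without_monitor": False, "monitor_intervened": False},
    {"hazard_without_monitor": False, "monitor_intervened": True},   # false alarm
]

n = len(scenarios)
hazard_rate_without = sum(s["hazard_without_monitor"] for s in scenarios) / n
hazard_rate_with = sum(
    s["hazard_without_monitor"] and not s["monitor_intervened"] for s in scenarios
) / n

safety_gain = hazard_rate_without - hazard_rate_with   # hazards averted by the monitor
residual_hazard = hazard_rate_with                     # hazards the monitor still misses
print(safety_gain, residual_hazard)                    # -> 0.25 0.25
```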
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
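For the CBF entry above, here is a minimal safety-filter sketch on a one-dimensional single integrator; the dynamics, barrier function, and closed-form solution are textbook simplifications and omit the model uncertainty and event-triggered data collection that the paper addresses.

```python
# CBF safety filter for x_dot = u with safe set {x >= 0}, barrier h(x) = x.
# The filter picks the control closest to a nominal command subject to
# h_dot + alpha * h >= 0, which for this system reduces to u >= -alpha * x.

def cbf_safety_filter(x: float, u_nominal: float, alpha: float = 1.0) -> float:
    u_min = -alpha * x            # pointwise feasibility bound from the CBF condition
    return max(u_nominal, u_min)  # closed-form QP solution for a scalar input

x = 0.5          # current state, still inside the safe set
u_nom = -2.0     # nominal controller wants to drive x negative (unsafe)
print(cbf_safety_filter(x, u_nom))   # -> -0.5, the least-restrictive safe control
```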
- Reliability Assessment and Safety Arguments for Machine Learning Components in Assuring Learning-Enabled Autonomous Systems [19.65793237440738]
We present an overall assurance framework for Learning-Enabled Systems (LES).
We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers.
We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM.
arXiv Detail & Related papers (2021-11-30T14:39:22Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows efficient evaluation of safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
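The following toy interval propagation through a single linear-plus-ReLU layer illustrates the basic mechanics of interval analysis for a decision-making model; the weights, input box, and property check are fabricated for illustration and are far simpler than the paper's semi-formal procedure.

```python
import numpy as np

# Propagate an input box [lo, hi] through y = relu(W x + b) and check whether a
# simple property (the score of a designated safe action stays above a threshold)
# holds for every input in the box.
def interval_linear(lo, hi, W, b):
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def interval_relu(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

W = np.array([[1.0, -0.5], [0.3, 0.8]])
b = np.array([0.1, -0.2])
lo, hi = np.array([0.0, 0.0]), np.array([0.2, 0.2])   # perturbation box around an input

out_lo, out_hi = interval_relu(*interval_linear(lo, hi, W, b))
threshold = 0.05
certified = out_lo[0] >= threshold   # True only if the property holds over the whole box
print(out_lo, out_hi, certified)
```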
- SAMBA: Safe Model-Based & Active Reinforcement Learning [59.01424351231993]
SAMBA is a framework for safe reinforcement learning that combines aspects from probabilistic modelling, information theory, and statistics.
We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low- and high-dimensional state representations.
We provide intuition as to the effectiveness of the framework by a detailed analysis of our active metrics and safety constraints.
arXiv Detail & Related papers (2020-06-12T10:40:46Z)