Approach Towards Semi-Automated Certification for Low Criticality ML-Enabled Airborne Applications
- URL: http://arxiv.org/abs/2501.17028v1
- Date: Tue, 28 Jan 2025 15:49:51 GMT
- Title: Approach Towards Semi-Automated Certification for Low Criticality ML-Enabled Airborne Applications
- Authors: Chandrasekar Sridhar, Vyakhya Gupta, Prakhar Jain, Karthik Vaidhyanathan
- Abstract summary: This paper proposes a semi-automated certification approach, specifically for low-criticality ML systems.
Key aspects include a structured classification that guides certification rigor based on system attributes, an Assurance Profile that consolidates evaluation outcomes into a confidence measure for the ML component, and methodologies for integrating human oversight into certification activities.
- Abstract: As Machine Learning (ML) makes its way into aviation, ML-enabled systems, including low-criticality systems, require a reliable certification process to ensure safety and performance. Traditional standards, like DO-178C, which are used for critical software in aviation, do not fully cover the unique aspects of ML. This paper proposes a semi-automated certification approach, specifically for low-criticality ML systems, focusing on data and model validation, resilience assessment, and usability assurance while integrating manual and automated processes. Key aspects include a structured classification that guides certification rigor based on system attributes, an Assurance Profile that consolidates evaluation outcomes into a confidence measure for the ML component, and methodologies for integrating human oversight into certification activities. Through a case study with a YOLOv8-based object detection system designed to classify military and civilian vehicles in real time for reconnaissance and surveillance aircraft, we show how this approach supports the certification of ML systems in low-criticality airborne applications.
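The abstract's "Assurance Profile" consolidates evaluation outcomes into a single confidence measure. The paper does not specify the aggregation function, so the sketch below assumes one plausible reading: a weighted average of normalized evaluation scores, with weights derived from the system-attribute classification. All names (`EvaluationOutcome`, `assurance_confidence`, the four evaluation dimensions and their example scores and weights) are hypothetical illustrations, not the authors' actual method.

```python
from dataclasses import dataclass


@dataclass
class EvaluationOutcome:
    """One evaluation activity's result, normalized for aggregation."""
    name: str      # e.g. "data_validation", "resilience"
    score: float   # outcome of the activity, normalized to [0, 1]
    weight: float  # rigor weight assigned by the system classification


def assurance_confidence(outcomes: list[EvaluationOutcome]) -> float:
    """Consolidate evaluation outcomes into one confidence measure
    via a weighted average (an assumed aggregation scheme)."""
    total_weight = sum(o.weight for o in outcomes)
    if total_weight == 0:
        raise ValueError("at least one outcome must carry weight")
    return sum(o.score * o.weight for o in outcomes) / total_weight


# Illustrative profile mirroring the paper's focus areas:
# data/model validation, resilience assessment, usability assurance.
profile = [
    EvaluationOutcome("data_validation", 0.92, 0.3),
    EvaluationOutcome("model_validation", 0.88, 0.3),
    EvaluationOutcome("resilience", 0.75, 0.2),
    EvaluationOutcome("usability", 0.80, 0.2),
]
confidence = assurance_confidence(profile)  # 0.85 for these example values
```

Under this scheme, a certifier (or an automated gate, with human oversight for borderline cases) would compare the consolidated confidence against a threshold chosen per criticality class.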
Related papers
- SafeBench: A Safety Evaluation Framework for Multimodal Large Language Models [75.67623347512368]
We propose SafeBench, a comprehensive framework designed for conducting safety evaluations of MLLMs.
Our framework consists of a comprehensive harmful query dataset and an automated evaluation protocol.
Based on our framework, we conducted large-scale experiments on 15 widely-used open-source MLLMs and 6 commercial MLLMs.
arXiv Detail & Related papers (2024-10-24T17:14:40Z) - Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics [54.57914943017522]
We highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications.
arXiv Detail & Related papers (2024-02-15T22:01:45Z) - Runway Sign Classifier: A DAL C Certifiable Machine Learning System [4.012351415340318]
We present a case study of an airborne system utilizing a Deep Neural Network (DNN) for airport sign detection and classification.
To achieve DAL C, we employ an established architectural mitigation technique involving two redundant and dissimilar DNNs.
This work is intended to illustrate how the certification challenges of ML-based systems can be addressed for medium criticality airborne applications.
arXiv Detail & Related papers (2023-10-10T10:26:30Z) - Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) sees an increasing prevalence of being used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal will greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z) - Rethinking Certification for Trustworthy Machine Learning-Based
Applications [3.886429361348165]
Machine Learning (ML) is increasingly used to implement advanced applications with non-deterministic behavior.
Existing certification schemes are not immediately applicable to non-deterministic applications built on ML models.
This article analyzes the challenges and deficiencies of current certification schemes, discusses open research issues, and proposes a first certification scheme for ML-based applications.
arXiv Detail & Related papers (2023-05-26T11:06:28Z) - Benchmarking Automated Machine Learning Methods for Price Forecasting
Applications [58.720142291102135]
We show the possibility of substituting manually created ML pipelines with automated machine learning (AutoML) solutions.
Based on the CRISP-DM process, we split the manual ML pipeline into a machine learning and non-machine learning part.
We show in a case study for the industrial use case of price forecasting, that domain knowledge combined with AutoML can weaken the dependence on ML experts.
arXiv Detail & Related papers (2023-04-28T10:27:38Z) - Toward Certification of Machine-Learning Systems for Low Criticality
Airborne Applications [0.0]
Possible airborne applications of machine learning (ML) include safety-critical functions.
Current certification standards for the aviation industry were developed prior to the ML renaissance.
There are some fundamental incompatibilities between traditional design assurance approaches and certain aspects of ML-based systems.
arXiv Detail & Related papers (2022-09-28T10:13:28Z) - Joint Differentiable Optimization and Verification for Certified
Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z) - Reliability Assessment and Safety Arguments for Machine Learning
Components in Assuring Learning-Enabled Autonomous Systems [19.65793237440738]
We present an overall assurance framework for Learning-Enabled Systems (LES).
We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers.
We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM.
arXiv Detail & Related papers (2021-11-30T14:39:22Z) - Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
We survey new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z) - Manifold for Machine Learning Assurance [9.594432031144716]
We propose an analogous approach for machine-learning (ML) systems, using an ML technique that extracts from the high-dimensional training data a manifold implicitly describing the required system.
It is then harnessed for a range of quality assurance tasks such as test adequacy measurement, test input generation, and runtime monitoring of the target ML system.
Preliminary experiments establish that the proposed manifold-based approach drives diversity in test data for adequacy measurement, yields fault-revealing yet realistic test cases for test generation, and provides an independent means of assessing the trustability of the target system's output for runtime monitoring.
arXiv Detail & Related papers (2020-02-08T11:39:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.