Framework for Certification of AI-Based Systems
- URL: http://arxiv.org/abs/2302.11049v1
- Date: Tue, 21 Feb 2023 23:08:37 GMT
- Title: Framework for Certification of AI-Based Systems
- Authors: Maxime Gariel, Brian Shimanuki, Rob Timpe, Evan Wilson
- Abstract summary: The current certification process for aerospace software is not adapted to "AI-based" algorithms such as deep neural networks.
This paper proposes a framework and principles that could be used to establish certification methods for neural network models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The current certification process for aerospace software is not adapted to
"AI-based" algorithms such as deep neural networks. Unlike traditional
aerospace software, the precise parameters optimized during neural network
training are as important as (or more important than) the code that processes
the network, and they are not directly mathematically understandable. Despite
their lack of explainability, such algorithms are appealing because, for some
applications, they can exhibit high performance unattainable with any
traditional explicit line-by-line software method.
This paper proposes a framework and principles that could be used to
establish certification methods for neural network models to which current
certification processes such as DO-178 cannot be applied. While it is not a
magic recipe, it is a set of common-sense steps that will allow the applicant
and the regulator to increase their confidence in the developed software by
demonstrating the capability to bring together, trace, and track the
requirements, data, software, training process, and test results.
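The traceability step can be made concrete with a small sketch. The record layout, field names, and hashing choices below are illustrative assumptions, not part of the paper:

```python
# Hypothetical sketch of a certification traceability record: every artifact
# (requirements, data, training config, model, test results) is content-hashed
# so applicant and regulator can verify exactly which versions were reviewed.
import hashlib
import json
from dataclasses import dataclass, asdict
from typing import List

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass
class CertificationRecord:
    requirement_ids: List[str]   # requirements the model is meant to satisfy
    dataset_hash: str            # frozen training/validation data
    training_config_hash: str    # hyperparameters, seeds, code version
    model_hash: str              # trained weights actually being certified
    test_report_hash: str        # evaluation results shown to the regulator

record = CertificationRecord(
    requirement_ids=["REQ-001", "REQ-002"],
    dataset_hash=sha256_of(b"<frozen dataset archive bytes>"),
    training_config_hash=sha256_of(b"<training config bytes>"),
    model_hash=sha256_of(b"<serialized weights>"),
    test_report_hash=sha256_of(b"<test report bytes>"),
)
print(json.dumps(asdict(record), indent=2))
```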
Related papers
- Formal and Practical Elements for the Certification of Machine Learning Systems [0.9217021281095907]
We show how the parameters of machine learning models are neither hand-coded nor derived from physics but learned from data.
As a result, requirements cannot be directly traced to lines of code, hindering the current bottom-up aerospace certification paradigm.
Based on a scalable statistical verifier, our proposed framework is model-agnostic and tool-independent.
arXiv Detail & Related papers (2023-10-05T00:20:59Z)
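A minimal sketch of what a model-agnostic statistical verifier like the one described above could look like, assuming a black-box model interface and a one-sided Hoeffding bound; the paper's actual verifier and bound may differ:

```python
# Hedged sketch of a statistical verifier: sample inputs from the operational
# domain, count requirement violations, and report an upper confidence bound
# on the violation probability. Model-agnostic: the model is a black box.
import math
import random

def statistical_verify(model, sample_input, violates, n=10_000, delta=1e-3):
    failures = sum(violates(model(sample_input())) for _ in range(n))
    p_hat = failures / n
    # One-sided Hoeffding bound: P(true rate > p_hat + eps) <= delta.
    eps = math.sqrt(math.log(1 / delta) / (2 * n))
    return p_hat, p_hat + eps

# Toy usage: any black-box function; the requirement here is output <= 1.
model = lambda x: x * x
p_hat, upper = statistical_verify(
    model,
    sample_input=lambda: random.uniform(-1, 1),
    violates=lambda y: y > 1.0,
)
print(f"observed violation rate {p_hat:.4f}, 99.9% upper bound {upper:.4f}")
```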
- Using Machine Learning To Identify Software Weaknesses From Software Requirement Specifications [49.1574468325115]
This research focuses on finding an efficient machine learning algorithm to identify software weaknesses from requirement specifications.
Keywords extracted using latent semantic analysis help map the CWE categories to PROMISE_exp. Naive Bayes, support vector machine (SVM), decision trees, neural network, and convolutional neural network (CNN) algorithms were tested.
arXiv Detail & Related papers (2023-08-10T13:19:10Z)
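A hedged scikit-learn sketch of the pipeline described above, with a made-up stand-in for the PROMISE_exp data and an SVM standing in for the full set of tested algorithms:

```python
# Illustrative sketch (not the paper's code): TF-IDF features, latent
# semantic analysis via truncated SVD, and an SVM classifier mapping
# requirement sentences to CWE weakness categories.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC

requirements = [
    "The system shall accept user-supplied file paths.",
    "Passwords shall be stored for later reuse.",
    "The UI shall display the current time.",
    "The server shall execute uploaded scripts.",
]
labels = ["CWE-22", "CWE-256", "benign", "CWE-94"]  # hypothetical mapping

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    TruncatedSVD(n_components=2),   # LSA: low-rank semantic space
    LinearSVC(),
)
clf.fit(requirements, labels)
print(clf.predict(["The service shall run user provided scripts."]))
```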
- An FPGA Architecture for Online Learning using the Tsetlin Machine [5.140342614848069]
This paper proposes a novel field-programmable gate array infrastructure for online learning.
It implements a low-complexity machine learning algorithm called the Tsetlin Machine.
We present use cases for online learning using the proposed infrastructure and demonstrate the energy/performance/accuracy trade-offs.
arXiv Detail & Related papers (2023-06-01T13:33:26Z)
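A minimal sketch of the learning element underlying the Tsetlin Machine above, a two-action Tsetlin automaton; the clause and voting machinery and the FPGA mapping are omitted:

```python
# A Tsetlin automaton has 2*N states: states 1..N choose "exclude", states
# N+1..2N choose "include". Rewards push the state deeper into the current
# action; penalties push it toward the opposite action. This is a simplified
# illustration of the building block, not the full Tsetlin Machine.
class TsetlinAutomaton:
    def __init__(self, n_states_per_action=100):
        self.n = n_states_per_action
        self.state = self.n  # start at the boundary, action "exclude"

    def action(self) -> str:
        return "include" if self.state > self.n else "exclude"

    def reward(self):
        # Reinforce the current action (saturating at the extremes).
        if self.state > self.n:
            self.state = min(self.state + 1, 2 * self.n)
        else:
            self.state = max(self.state - 1, 1)

    def penalize(self):
        # Drift one step toward the opposite action.
        self.state += 1 if self.state <= self.n else -1

ta = TsetlinAutomaton()
for _ in range(150):
    ta.reward() if ta.action() == "exclude" else ta.penalize()
print(ta.action())  # stays "exclude" once reinforced
```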
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on BP optimization.
Unlike FF, our framework directly outputs a label distribution at each cascaded block and so does not require generating additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
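A hedged PyTorch sketch of the block-wise training idea above: detached inputs keep gradients local to each block. The two-block architecture and layer sizes are illustrative assumptions, not the paper's design:

```python
# Each cascaded block has its own label predictor trained on detached
# features, so no gradients (and no BP) cross block boundaries.
import torch
import torch.nn as nn

blocks = [nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
          nn.Sequential(nn.Linear(256, 128), nn.ReLU())]
heads = [nn.Linear(256, 10), nn.Linear(128, 10)]  # per-block predictors
opts = [torch.optim.Adam([*b.parameters(), *h.parameters()], lr=1e-3)
        for b, h in zip(blocks, heads)]
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 784)           # stand-in batch
y = torch.randint(0, 10, (32,))    # stand-in labels

features = x
for block, head, opt in zip(blocks, heads, opts):
    inp = features.detach()        # cut the graph: train this block locally
    out = block(inp)
    loss = loss_fn(head(out), y)   # block outputs a label distribution
    opt.zero_grad()
    loss.backward()                # gradients stay inside this block
    opt.step()
    features = out

# Because blocks only need detached features, they can be trained
# independently, e.g. on parallel accelerators.
```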
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
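A simplified float-only sketch of the interval-bound-propagation core behind the method above; the quantization effects that QA-IBP additionally models are omitted:

```python
# Plain IBP through a linear layer and ReLU: given elementwise input bounds
# [lo, hi], compute sound output bounds by splitting weights by sign.
import numpy as np

def ibp_linear(W, b, lo, hi):
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def ibp_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy layer: certified output range for all x in [x0 - eps, x0 + eps].
W = np.array([[1.0, -2.0], [0.5, 0.5]])
b = np.zeros(2)
x0, eps = np.array([0.3, -0.1]), 0.1
lo, hi = ibp_relu(*ibp_linear(W, b, x0 - eps, x0 + eps))
print(lo, hi)
```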
- Improving Compositionality of Neural Networks by Decoding Representations to Inputs [83.97012077202882]
We bridge the benefits of traditional and deep learning programs by jointly training a generative model to constrain neural network activations to "decode" back to inputs.
We demonstrate applications of decodable representations to out-of-distribution detection, adversarial examples, calibration, and fairness.
arXiv Detail & Related papers (2021-06-01T20:07:16Z)
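A hedged sketch of the joint objective described above: a classifier loss plus a reconstruction loss that keeps activations decodable. The architecture and loss weight are illustrative assumptions:

```python
# A classifier is trained jointly with a decoder that must reconstruct the
# input from the hidden activations, constraining them to stay decodable.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
classifier = nn.Linear(128, 10)
decoder = nn.Linear(128, 784)    # maps activations back to input space

opt = torch.optim.Adam(
    [*encoder.parameters(), *classifier.parameters(), *decoder.parameters()],
    lr=1e-3,
)
x = torch.rand(32, 784)
y = torch.randint(0, 10, (32,))

h = encoder(x)
loss = nn.functional.cross_entropy(classifier(h), y) \
     + 1.0 * nn.functional.mse_loss(decoder(h), x)  # decodability constraint
opt.zero_grad(); loss.backward(); opt.step()

# At test time, a large reconstruction error ||decoder(h) - x|| can flag
# out-of-distribution or adversarial inputs.
```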
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks could meet the fairness constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
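For illustration only, a generic Lagrangian-penalty sketch of fairness-constrained training (pushing a demographic-parity gap below a tolerance); this is a standard technique, not the paper's provable construction:

```python
# The multiplier lam grows while the parity gap exceeds the tolerance,
# trading classification loss against the fairness constraint.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam, tol = 0.0, 0.05                       # Lagrange multiplier, tolerance

x = torch.randn(256, 5)                    # stand-in features
y = torch.randint(0, 2, (256,)).float()    # stand-in labels
group = torch.randint(0, 2, (256,))        # protected attribute (two groups)

for _ in range(200):
    p = torch.sigmoid(net(x)).squeeze(1)
    gap = (p[group == 0].mean() - p[group == 1].mean()).abs()
    loss = nn.functional.binary_cross_entropy(p, y) + lam * gap
    opt.zero_grad(); loss.backward(); opt.step()
    lam += 0.1 * max(gap.item() - tol, 0.0)  # dual ascent on the multiplier

print(f"final demographic-parity gap: {gap.item():.3f}")
```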
- Designing Neural Networks for Real-Time Systems [9.28438818579034]
Most approaches in the literature consider guaranteeing only the functionality of ANN-based controllers.
We propose a design pipeline whereby neural networks trained using the popular deep learning framework Keras are compiled to functionally equivalent C code.
arXiv Detail & Related papers (2020-08-26T21:41:37Z)
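A toy sketch of the compile-to-C idea above: emit a loop-based C function with static weight arrays from a dense layer's weights, a form whose fixed memory and iteration counts suit real-time analysis. This emitter is illustrative, not the paper's pipeline:

```python
import numpy as np

def emit_dense_c(name, W, b):
    rows, cols = W.shape  # Keras stores Dense kernels as (inputs, outputs)
    w_init = ", ".join(f"{v:.8f}f" for v in W.flatten())
    b_init = ", ".join(f"{v:.8f}f" for v in b)
    return f"""
static const float {name}_W[{rows * cols}] = {{{w_init}}};
static const float {name}_b[{cols}] = {{{b_init}}};

void {name}(const float in[{rows}], float out[{cols}]) {{
    for (int j = 0; j < {cols}; ++j) {{
        float acc = {name}_b[j];
        for (int i = 0; i < {rows}; ++i)
            acc += in[i] * {name}_W[i * {cols} + j];
        out[j] = acc > 0.0f ? acc : 0.0f;  /* ReLU activation */
    }}
}}
"""

# With a real model: W, b = model.layers[i].get_weights()
W = np.random.randn(4, 3).astype(np.float32)
b = np.zeros(3, dtype=np.float32)
print(emit_dense_c("dense_0", W, b))
```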
- Robust Pruning at Initialization [61.30574156442608]
There is a growing need for smaller, energy-efficient neural networks that make machine learning applications usable on devices with limited computational resources.
For Deep NNs, such procedures remain unsatisfactory as the resulting pruned networks can be difficult to train and, for instance, they do not prevent one layer from being fully pruned.
arXiv Detail & Related papers (2020-02-19T17:09:50Z)
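A sketch of the kind of naive magnitude pruning at initialization the abstract criticizes; printing per-layer survival makes the layer-collapse failure mode visible:

```python
# Naive global magnitude pruning at init: one global threshold can remove an
# entire layer, exactly the failure mode the abstract points out.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
sparsity = 0.95

weights = torch.cat([p.detach().abs().flatten()
                     for p in net.parameters() if p.dim() > 1])
threshold = torch.quantile(weights, sparsity)   # global magnitude cutoff

for name, p in net.named_parameters():
    if p.dim() > 1:
        mask = (p.detach().abs() > threshold).float()
        p.data.mul_(mask)
        kept = int(mask.sum())
        print(f"{name}: kept {kept}/{mask.numel()} weights")
        # kept == 0 here would mean the layer was fully pruned.
```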