A Stochastic Approach to Classification Error Estimates in Convolutional
Neural Networks
- URL: http://arxiv.org/abs/2401.06156v1
- Date: Thu, 21 Dec 2023 15:31:52 GMT
- Title: A Stochastic Approach to Classification Error Estimates in Convolutional
Neural Networks
- Authors: Jan Peleska, Felix Brüning, Mario Gleirscher, Wen-ling Huang
- Abstract summary: As a running example, we use the obstacle detection function needed in future autonomous freight trains with Grade of Automation (GoA) 4.
We present a quantitative analysis of the system-level hazard rate to be expected from an obstacle detection function.
It is shown that using sensor/perceptor fusion, the fused detection system can meet the tolerable hazard rate deemed to be acceptable for the safety integrity level to be applied.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This technical report presents research results achieved in the field of
verification of trained Convolutional Neural Networks (CNNs) used for image
classification in safety-critical applications. As a running example, we use the
obstacle detection function needed in future autonomous freight trains with
Grade of Automation (GoA) 4. It is shown that systems like GoA 4 freight trains
are indeed certifiable today with new standards like ANSI/UL 4600 and ISO 21448
used in addition to the long-existing standards EN 50128 and EN 50129.
Moreover, we present a quantitative analysis of the system-level hazard rate to
be expected from an obstacle detection function. It is shown that using
sensor/perceptor fusion, the fused detection system can meet the tolerable
hazard rate deemed to be acceptable for the safety integrity level to be
applied (SIL-3). A mathematical analysis of CNN models is performed which
results in the identification of classification clusters and equivalence
classes partitioning the image input space of the CNN. These clusters and
classes are used to introduce a novel statistical testing method for
determining the residual error probability of a trained CNN and an associated
upper confidence limit. We argue that this greybox approach to CNN
verification, taking into account the CNN model's internal structure, is
essential for justifying that the statistical tests have covered the trained
CNN with its neurons and inter-layer mappings in a comprehensive way.
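To make the quantitative argument concrete, the sketch below (our illustration, not code from the paper) shows the two kinds of calculation the abstract refers to: a Clopper-Pearson-style upper confidence limit on the residual misclassification probability estimated from statistical tests sampled over the equivalence classes, and the hazard rate of a two-channel sensor/perceptor fusion compared against a tolerable hazard rate of the kind associated with SIL-3. All numbers, the independence assumption between channels, and the function names are illustrative placeholders.

```python
from scipy.stats import beta


def upper_confidence_limit(n_tests: int, n_errors: int, confidence: float = 0.95) -> float:
    """One-sided (Clopper-Pearson) upper bound on the residual error
    probability of a classifier, given n_errors misclassifications
    observed in n_tests statistically independent test images."""
    if n_errors >= n_tests:
        return 1.0
    # Exact upper limit via the Beta distribution quantile.
    return beta.ppf(confidence, n_errors + 1, n_tests - n_errors)


def fused_hazard_rate(p_miss_a: float, p_miss_b: float, demands_per_hour: float) -> float:
    """Hazard rate of a 1-out-of-2 detection fusion: a hazard arises only
    if BOTH channels miss the obstacle on a demand (assumes independent
    channel failures, which would have to be justified separately)."""
    return p_miss_a * p_miss_b * demands_per_hour


if __name__ == "__main__":
    # Illustrative figures only -- not taken from the paper.
    p_up = upper_confidence_limit(n_tests=100_000, n_errors=3)
    print(f"95% upper confidence limit on residual error probability: {p_up:.2e}")

    thr = 1e-7  # illustrative tolerable hazard rate per hour, of the order associated with SIL-3
    rate = fused_hazard_rate(p_miss_a=p_up, p_miss_b=1e-3, demands_per_hour=1.0)
    verdict = "meets" if rate < thr else "exceeds"
    print(f"Fused hazard rate: {rate:.2e} /h ({verdict} assumed THR {thr:.0e} /h)")
```

With the placeholder numbers above, neither channel alone would stay below the assumed tolerable hazard rate; only the fused pair does, which mirrors the sensor/perceptor fusion argument made in the abstract.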
Related papers
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Performance evaluation of Machine learning algorithms for Intrusion Detection System [0.40964539027092917]
This paper focuses on intrusion detection systems (IDSs) analysis using Machine Learning (ML) techniques.
We analyze the KDD CUP-'99' intrusion detection dataset used for training and validating ML models.
arXiv Detail & Related papers (2023-10-01T06:35:37Z)
- Target Detection on Hyperspectral Images Using MCMC and VI Trained Bayesian Neural Networks [0.0]
Bayesian neural networks (BNNs) provide uncertainty quantification (UQ) for NN predictions and estimates.
We apply and compare MCMC- and VI-trained BNNs in the context of target detection in hyperspectral imagery (HSI).
Both models are trained using out-of-the-box tools on a high-fidelity HSI target detection scene.
arXiv Detail & Related papers (2023-08-11T01:35:54Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Robust-by-Design Classification via Unitary-Gradient Neural Networks [66.17379946402859]
The use of neural networks in safety-critical systems requires safe and robust models, due to the existence of adversarial attacks.
Knowing the minimal adversarial perturbation of any input x, or, equivalently, the distance of x from the classification boundary, allows evaluating the classification robustness, providing certifiable predictions.
A novel network architecture named Unitary-Gradient Neural Network is presented.
Experimental results show that the proposed architecture approximates a signed distance, hence allowing an online certifiable classification of x at the cost of a single inference.
arXiv Detail & Related papers (2022-09-09T13:34:51Z)
- VPN: Verification of Poisoning in Neural Networks [11.221552724154988]
We study another neural network security issue, namely data poisoning.
In this case, an attacker inserts a trigger into a subset of the training data such that, at test time, the presence of this trigger in an input causes the trained model to misclassify it to some target class.
We show how to formulate the check for data poisoning as a property that can be checked with off-the-shelf verification tools.
arXiv Detail & Related papers (2022-05-08T15:16:05Z)
- SAFE-OCC: A Novelty Detection Framework for Convolutional Neural Network Sensors and its Application in Process Control [0.0]
We present a novelty detection framework for Convolutional Neural Network (CNN) sensors that we call Sensor-Activated Feature Extraction One-Class Classification (SAFE-OCC).
We show that this framework enables the safe use of computer vision sensors in process control architectures.
arXiv Detail & Related papers (2022-02-03T19:47:55Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- A Simple Framework to Quantify Different Types of Uncertainty in Deep Neural Networks for Image Classification [0.0]
Quantifying uncertainty in a model's predictions is important as it enables the safety of an AI system to be increased.
This is crucial for applications where the cost of an error is high, such as in autonomous vehicle control, medical image analysis, financial estimations or legal fields.
We propose a complete framework to capture and quantify three known types of uncertainty in Deep Neural Networks for the task of image classification.
arXiv Detail & Related papers (2020-11-17T15:36:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.