Verification of Neural Networks against Convolutional Perturbations via Parameterised Kernels
- URL: http://arxiv.org/abs/2411.04594v2
- Date: Mon, 17 Feb 2025 19:37:58 GMT
- Title: Verification of Neural Networks against Convolutional Perturbations via Parameterised Kernels
- Authors: Benedikt Brückner, Alessio Lomuscio
- Abstract summary: We develop a method for the efficient verification of neural networks against convolutional perturbations such as blurring or sharpening. To define input perturbations we use well-known camera shake, box blur and sharpen kernels. To facilitate their use in neural network verification, we develop an efficient way of convolving a given input with these parameterised kernels.
- Score: 18.052298354970258
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We develop a method for the efficient verification of neural networks against convolutional perturbations such as blurring or sharpening. To define input perturbations we use well-known camera shake, box blur and sharpen kernels. We demonstrate that these kernels can be linearly parameterised in a way that allows for a variation of the perturbation strength while preserving desired kernel properties. To facilitate their use in neural network verification, we develop an efficient way of convolving a given input with these parameterised kernels. The result of this convolution can be used to encode the perturbation in a verification setting by prepending a linear layer to a given network. This leads to tight bounds and a high effectiveness in the resulting verification step. We add further precision by employing input splitting as a branch and bound strategy. We demonstrate that we are able to verify robustness on a number of standard benchmarks where the baseline is unable to provide any safety certificates. To the best of our knowledge, this is the first solution for verifying robustness against specific convolutional perturbations such as camera shake.
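As a concrete illustration of the two key ideas in the abstract (a linearly parameterised kernel that keeps its defining properties at every strength, and folding the convolution into a linear layer prepended to the network), here is a minimal Python sketch. The interpolation form and helper names are illustrative assumptions, not the paper's implementation, and the dense matrix construction is far less efficient than the authors' encoding.

```python
import numpy as np
from scipy.signal import convolve2d

def parameterised_box_blur(alpha: float, size: int = 3) -> np.ndarray:
    """Linearly interpolate between the identity kernel and a uniform box
    blur. For every alpha in [0, 1] the entries are non-negative and sum
    to 1, so the perturbation strength varies while the kernel keeps its
    brightness-preserving property."""
    identity = np.zeros((size, size))
    identity[size // 2, size // 2] = 1.0
    box = np.full((size, size), 1.0 / size**2)
    return (1.0 - alpha) * identity + alpha * box

def convolution_as_matrix(kernel: np.ndarray, h: int, w: int) -> np.ndarray:
    """Materialise 'same'-mode 2-D convolution as an explicit matrix, i.e.
    the weight of a linear layer one could prepend to a network."""
    cols = []
    for i in range(h * w):
        basis = np.zeros(h * w)
        basis[i] = 1.0
        cols.append(convolve2d(basis.reshape(h, w), kernel, mode="same").ravel())
    return np.stack(cols, axis=1)

# Convolving the image directly and applying the prepended linear layer agree.
image = np.random.rand(8, 8)
kernel = parameterised_box_blur(alpha=0.5)
W = convolution_as_matrix(kernel, 8, 8)
assert np.allclose(W @ image.ravel(),
                   convolve2d(image, kernel, mode="same").ravel())
```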
Related papers
- Out of the Shadows: Exploring a Latent Space for Neural Network Verification [8.97708612393722]
We present an efficient verification tool for neural networks that uses our iterative refinement to significantly reduce the number of subproblems in a branch-and-bound procedure.
We demonstrate that our tool achieves competitive performance, which would place it among the top-ranking tools of the last neural network verification competition.
arXiv Detail & Related papers (2025-05-23T13:05:07Z)
- Provably-Safe Neural Network Training Using Hybrid Zonotope Reachability Analysis [0.46040036610482665]
It is difficult to enforce constraints on neural networks in safety-critical control applications.
We propose a method that can compute the exact image of a non-convex input set through a neural network with rectified linear unit (ReLU) nonlinearities.
We demonstrate the practicality of our method by training a forward-invariant neural network controller for a safety-critical system.
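The entry does not explain hybrid zonotopes; as a rough sketch of the underlying set representation, a plain zonotope (a center plus a generator matrix) passes exactly through affine layers, and the paper's hybrid variant adds binary variables so that ReLU images also remain exact. The sketch below covers only the affine step, with illustrative names:

```python
import numpy as np

class Zonotope:
    """The set {c + G @ xi : xi in [-1, 1]^k}: a center plus generators."""
    def __init__(self, center: np.ndarray, generators: np.ndarray):
        self.c, self.G = center, generators

    def affine(self, W: np.ndarray, b: np.ndarray) -> "Zonotope":
        # An affine layer W x + b maps a zonotope to a zonotope exactly.
        return Zonotope(W @ self.c + b, W @ self.G)

    def interval_bounds(self):
        # Elementwise bounds: the radius per dimension is |G| summed per row.
        radius = np.abs(self.G).sum(axis=1)
        return self.c - radius, self.c + radius

# Propagate a small input box through one linear layer of a toy network.
z = Zonotope(center=np.zeros(2), generators=0.1 * np.eye(2))
W, b = np.array([[1.0, -2.0], [0.5, 1.0]]), np.array([0.1, -0.3])
lo, hi = z.affine(W, b).interval_bounds()
```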
arXiv Detail & Related papers (2025-01-22T17:13:48Z)
- Convex neural network synthesis for robustness in the 1-norm [0.0]
This paper proposes a method to generate an approximation of a neural network which is certifiably more robust.
An application to robustifying model predictive control is used to demonstrate the results.
arXiv Detail & Related papers (2024-05-29T12:17:09Z)
- Set-Based Training for Neural Network Verification [8.97708612393722]
Small input perturbations can significantly affect the outputs of a neural network.
To ensure safety in safety-critical environments, the robustness of a neural network must be verified.
We present a novel set-based training procedure in which we compute the set of possible outputs.
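The entry does not say how the output set is represented; the simplest scheme for bounding "the set of possible outputs" is interval bound propagation, sketched below as an over-approximating stand-in for the paper's set-based procedure (names are illustrative):

```python
import numpy as np

def interval_forward(layers, lo, hi):
    """Propagate an elementwise input box [lo, hi] through a ReLU network
    and return a box containing every possible output (an over-approximation
    of the true output set)."""
    for i, (W, b) in enumerate(layers):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = (W_pos @ lo + W_neg @ hi + b,
                  W_pos @ hi + W_neg @ lo + b)
        if i < len(layers) - 1:  # ReLU on hidden layers; it is monotone,
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # so bounds map through
    return lo, hi

rng = np.random.default_rng(1)
layers = [(rng.standard_normal((4, 2)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
out_lo, out_hi = interval_forward(layers, np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
```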
arXiv Detail & Related papers (2024-01-26T15:52:41Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
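To make the counting problem concrete, here is a deliberately naive sketch that enumerates a discretised input grid and counts safety violations; the paper proposes an exact approach, whereas exhaustive enumeration like this only scales to tiny domains. All names are illustrative:

```python
import itertools
import numpy as np

def count_unsafe(net, grid_axes, is_safe):
    """Count the input configurations (points of a discretised grid) whose
    network output violates a safety predicate."""
    violations = 0
    for point in itertools.product(*grid_axes):
        if not is_safe(net(np.array(point))):
            violations += 1
    return violations

# Toy 2-D "network" and the safety property "the outputs sum to below 0.5".
net = lambda x: np.tanh(x)
axes = [np.linspace(-1.0, 1.0, 50)] * 2
unsafe = count_unsafe(net, axes, is_safe=lambda y: y.sum() < 0.5)
```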
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Confidence-aware Training of Smoothed Classifiers for Certified Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
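The "accuracy under Gaussian noise" proxy is easy to compute by Monte Carlo sampling; a minimal sketch, assuming a hypothetical predict function that returns a class label:

```python
import numpy as np

def accuracy_under_gaussian_noise(predict, x, label, sigma=0.25, n=1000, seed=0):
    """Monte Carlo estimate of how often the classifier keeps its label when
    the input is perturbed by isotropic Gaussian noise of scale sigma."""
    rng = np.random.default_rng(seed)
    noisy = x + sigma * rng.standard_normal((n,) + x.shape)
    return np.mean([predict(z) == label for z in noisy])

# Usage with a toy nearest-centroid "classifier" on 2-D inputs.
centroids = np.array([[0.0, 0.0], [1.0, 1.0]])
predict = lambda z: int(np.argmin(((centroids - z) ** 2).sum(axis=1)))
proxy = accuracy_under_gaussian_noise(predict, x=np.array([0.1, 0.0]), label=0)
```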
arXiv Detail & Related papers (2022-12-18T03:57:12Z)
- Robust Explanation Constraints for Neural Networks [33.14373978947437]
Post-hoc explanation methods are used with the intent of providing insights about neural networks and are sometimes said to help engender trust in their outputs.
Our training method is the only one able to learn neural networks with certified explanation robustness across all six networks tested.
arXiv Detail & Related papers (2022-12-16T14:40:25Z)
- Towards Practical Control of Singular Values of Convolutional Layers [65.25070864775793]
Convolutional neural networks (CNNs) are easy to train, but their essential properties, such as generalization error and adversarial robustness, are hard to control.
Recent research demonstrated that singular values of convolutional layers significantly affect such elusive properties.
We offer a principled approach to alleviating constraints of the prior art at the expense of an insignificant reduction in layer expressivity.
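The singular values referred to here can be computed exactly for circular convolutions via the FFT-based result of Sedghi, Gupta and Long (2019), which work in this area commonly builds on; a sketch, assuming stride 1 and circular padding:

```python
import numpy as np

def conv_singular_values(kernel: np.ndarray, n: int) -> np.ndarray:
    """Exact singular values of a stride-1, circularly padded 2-D convolution
    on n x n inputs: take the 2-D FFT of the zero-padded kernel, then the
    SVD of the (c_out, c_in) transfer matrix at each frequency."""
    c_out, c_in, kh, kw = kernel.shape
    padded = np.zeros((c_out, c_in, n, n), dtype=complex)
    padded[:, :, :kh, :kw] = kernel
    transfer = np.fft.fft2(padded)               # FFT over the last two axes
    per_freq = transfer.transpose(2, 3, 0, 1)    # (n, n, c_out, c_in) stack
    return np.linalg.svd(per_freq, compute_uv=False).ravel()

# The largest singular value is the layer's spectral norm (its Lipschitz
# constant), one of the quantities such methods constrain.
svs = conv_singular_values(np.random.randn(8, 3, 3, 3), n=32)
spectral_norm = svs.max()
```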
arXiv Detail & Related papers (2022-11-24T19:09:44Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z)
- An Orthogonal Classifier for Improving the Adversarial Robustness of Neural Networks [21.13588742648554]
Recent efforts have shown that imposing certain modifications on the classification layer can improve the robustness of neural networks.
We explicitly construct a dense orthogonal weight matrix whose entries have the same magnitude, leading to a novel robust classifier.
Our method is efficient and competitive to many state-of-the-art defensive approaches.
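One standard way to obtain a dense orthogonal matrix whose entries all have the same magnitude is a normalised Hadamard matrix; a sketch of that construction follows (the paper's exact construction may differ):

```python
import numpy as np
from scipy.linalg import hadamard

def orthogonal_classifier_weights(num_classes: int, dim: int) -> np.ndarray:
    """A dense weight matrix with orthonormal rows whose entries all share
    the same magnitude, taken from a normalised Hadamard matrix. scipy's
    construction requires dim to be a power of two."""
    H = hadamard(dim) / np.sqrt(dim)   # entries are +/- 1/sqrt(dim)
    return H[:num_classes]

W = orthogonal_classifier_weights(num_classes=10, dim=64)
assert np.allclose(W @ W.T, np.eye(10))
```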
arXiv Detail & Related papers (2021-05-19T13:12:14Z)
- Performance Bounds for Neural Network Estimators: Applications in Fault Detection [2.388501293246858]
We exploit recent results in quantifying the robustness of neural networks to construct and tune a model-based anomaly detector.
In tuning, we specifically provide upper bounds on the rate of false alarms expected under normal operation.
arXiv Detail & Related papers (2021-03-22T19:23:08Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed that utilizes the Bayesian optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, false alarm rate, and recall.
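The entry gives no implementation detail, so the following is a generic Bayesian optimisation loop (Gaussian process surrogate with expected improvement) of the kind used to tune a detector's hyperparameters; every name here is illustrative rather than the paper's framework:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def bayesian_optimise(objective, bounds, n_init=5, n_iter=20, seed=0):
    """Maximise a black-box objective (e.g. detector validation accuracy as
    a function of one hyperparameter) with a GP surrogate and the expected
    improvement acquisition function."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[0], bounds[1], size=(n_init, 1))
    y = np.array([objective(x[0]) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(bounds[0], bounds[1], size=(256, 1))
        mu, sd = gp.predict(cand, return_std=True)
        z = (mu - y.max()) / (sd + 1e-9)
        ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
        x_next = cand[np.argmax(ei)]
        X, y = np.vstack([X, x_next]), np.append(y, objective(x_next[0]))
    return X[np.argmax(y)][0], y.max()

best_x, best_y = bayesian_optimise(lambda t: -(t - 0.3) ** 2, bounds=(0.0, 1.0))
```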
arXiv Detail & Related papers (2020-08-05T19:29:35Z)