Set-Based Training for Neural Network Verification
- URL: http://arxiv.org/abs/2401.14961v3
- Date: Tue, 14 Jan 2025 08:56:48 GMT
- Title: Set-Based Training for Neural Network Verification
- Authors: Lukas Koller, Tobias Ladner, Matthias Althoff
- Abstract summary: Small input perturbations can significantly affect the outputs of a neural network. To ensure safety in safety-critical environments, the robustness of a neural network must be verified. We present a novel set-based training procedure in which we compute the set of possible outputs.
- Score: 8.97708612393722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks are vulnerable to adversarial attacks, i.e., small input perturbations can significantly affect the outputs of a neural network. Therefore, to ensure safety in safety-critical environments, the robustness of a neural network must be formally verified against input perturbations, e.g., from noisy sensors. To improve the robustness of neural networks and thus simplify the formal verification, we present a novel set-based training procedure in which we compute the set of possible outputs given the set of possible inputs and compute for the first time a gradient set, i.e., each possible output has a different gradient. Therefore, we can directly reduce the size of the output enclosure by choosing gradients toward its center. Small output enclosures increase the robustness of a neural network and, at the same time, simplify its formal verification. The latter benefit arises because a larger size of propagated sets increases the conservatism of most verification methods. Our extensive evaluation demonstrates that set-based training produces robust neural networks with competitive performance, which can be verified using fast (polynomial-time) verification algorithms due to the reduced output set.
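The set-based forward pass described in the abstract can be sketched with plain interval (box) arithmetic. Everything below is an illustrative toy, not the authors' implementation: the paper's set representation and gradient-set computation are more general, while here simple boxes and random weights stand in.

```python
import numpy as np

def interval_linear(c, r, W, b):
    # Tightest box enclosing {W x + b : |x - c| <= r}: center W c + b, radius |W| r.
    return W @ c + b, np.abs(W) @ r

def interval_relu(c, r):
    # Box enclosure of ReLU applied to [c - r, c + r].
    lo, hi = np.maximum(c - r, 0.0), np.maximum(c + r, 0.0)
    return (lo + hi) / 2.0, (hi - lo) / 2.0

# Toy two-layer network with random weights (illustrative only).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((3, 4)), rng.standard_normal(3)

# Input set: a box of radius 0.1 around the origin.
c, r = np.zeros(2), np.full(2, 0.1)
c, r = interval_relu(*interval_linear(c, r, W1, b1))
c, r = interval_linear(c, r, W2, b2)

# A set-based training loss could add a penalty on the enclosure size,
# e.g. r.sum(), so that gradients pull every possible output toward the
# enclosure's center and shrink the output set.
print("output center:", c)
print("enclosure radius:", r)
```

A smaller final radius `r` directly corresponds to the smaller output enclosures the abstract argues are both more robust and easier to verify.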
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions, however they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - Adversarial Training Using Feedback Loops [1.6114012813668932]
Deep neural networks (DNNs) are highly susceptible to adversarial attacks due to limited generalizability.
This paper proposes a new robustification approach based on control theory.
The novel adversarial training approach based on the feedback control architecture is called Feedback Looped Adversarial Training (FLAT).
arXiv Detail & Related papers (2023-08-23T02:58:02Z) - Expediting Neural Network Verification via Network Reduction [4.8621567234713305]
We propose a network reduction technique as a pre-processing method prior to verification.
The proposed method reduces neural networks by eliminating stable ReLU neurons and transforming the result into a sequential neural network.
We instantiate the reduction technique on the state-of-the-art complete and incomplete verification tools.
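A minimal sketch of the stable-neuron elimination idea, assuming box bounds on the pre-activations are already available; the function and variable names are illustrative, not from the tool:

```python
import numpy as np

def reduce_stable_relu(W1, b1, W2, b2, lo, hi):
    # Classify each hidden ReLU neuron from its pre-activation bounds [lo, hi]:
    #   stably inactive (hi <= 0): always outputs 0, so it is dropped;
    #   stably active  (lo >= 0): acts as the identity, so the surrounding
    #   linear layers can be merged through it; the rest stay as ReLU neurons.
    active, inactive = lo >= 0, hi <= 0
    unstable = ~(active | inactive)
    W_merge = W2[:, active] @ W1[active]            # merged linear bypass
    b_merge = W2[:, active] @ b1[active] + b2
    return W1[unstable], b1[unstable], W2[:, unstable], W_merge, b_merge, unstable

# Toy network and pre-activation bounds for inputs in a small box.
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((6, 3)), rng.standard_normal(6)
W2, b2 = rng.standard_normal((2, 6)), rng.standard_normal(2)
c, r = np.zeros(3), np.full(3, 0.05)
lo = W1 @ c - np.abs(W1) @ r + b1
hi = W1 @ c + np.abs(W1) @ r + b1

W1u, b1u, W2u, W_merge, b_merge, unstable = reduce_stable_relu(W1, b1, W2, b2, lo, hi)

# On any input inside the box, the reduced network matches the original exactly.
x = c + r * rng.uniform(-1, 1, size=3)
orig = W2 @ np.maximum(W1 @ x + b1, 0.0) + b2
red = W2u @ np.maximum(W1u @ x + b1u, 0.0) + W_merge @ x + b_merge
print("kept neurons:", int(unstable.sum()), "of", len(b1))
```

Because only unstable neurons keep their nonlinearity, a verifier afterwards has far fewer ReLU cases to branch on, which is the speedup the paper targets.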
arXiv Detail & Related papers (2023-08-07T06:23:24Z) - Fully Automatic Neural Network Reduction for Formal Verification [8.017543518311196]
We introduce a fully automatic and sound reduction of neural networks using reachability analysis.
The soundness ensures that the verification of the reduced network entails the verification of the original network.
We show that our approach can reduce the number of neurons to a fraction of the original number with only minor outer-approximation error.
arXiv Detail & Related papers (2023-05-03T07:13:47Z) - Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
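The certification primitive behind IBP-style training can be sketched as follows. This is plain interval bound propagation only; the quantization-aware part of QA-IBP, which additionally models rounding of weights and activations, is not shown, and the network is a random toy:

```python
import numpy as np

def ibp_certify(layers, x, eps, label):
    # Push the box [x - eps, x + eps] through the network with interval
    # arithmetic and check that the true class's worst-case logit beats
    # every other class's best-case logit.
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
        c, r = W @ c + b, np.abs(W) @ r
        lo, hi = c - r, c + r
        if i < len(layers) - 1:          # ReLU between hidden layers
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    others = np.arange(len(lo)) != label
    return bool(lo[label] > hi[others].max())

# Toy network and input (random, illustrative).
rng = np.random.default_rng(2)
layers = [(rng.standard_normal((8, 4)), rng.standard_normal(8)),
          (rng.standard_normal((3, 8)), rng.standard_normal(3))]
x = rng.standard_normal(4)

# Exact forward pass to pick the predicted label.
z = x
for i, (W, b) in enumerate(layers):
    z = np.maximum(W @ z + b, 0.0) if i < len(layers) - 1 else W @ z + b
label = int(np.argmax(z))
print("certified at eps=0.01:", ibp_certify(layers, x, 0.01, label))
```

IBP-based training minimizes a loss on these worst-case bounds, so the network learns weights for which this check passes at the target perturbation radius.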
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Neural Clamping: Joint Input Perturbation and Temperature Scaling for Neural Network Calibration [62.4971588282174]
We propose a new post-processing calibration method called Neural Clamping.
Our empirical results show that Neural Clamping significantly outperforms state-of-the-art post-processing calibration methods.
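Neural Clamping combines a learnable input perturbation with temperature scaling; only the temperature-scaling half is sketched here, with a simple grid search instead of gradient-based fitting, on synthetic logits:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Mean negative log-likelihood of the labels under temperature T.
    p = softmax(logits / T)
    return float(-np.log(p[np.arange(len(labels)), labels] + 1e-12).mean())

# Synthetic, deliberately sharp logits standing in for a calibration split.
rng = np.random.default_rng(3)
labels = rng.integers(0, 3, size=200)
logits = 5.0 * (np.eye(3)[labels] + 0.3 * rng.standard_normal((200, 3)))

# Fit the temperature by grid search (Neural Clamping fits it jointly with
# an input perturbation by gradient descent; only this half is sketched).
Ts = np.linspace(0.5, 10.0, 96)
best_T = float(Ts[np.argmin([nll(logits, labels, T) for T in Ts])])
print("calibrated temperature:", round(best_T, 2))
```

Dividing the logits by the fitted temperature leaves predictions unchanged but rescales confidences, which is what post-processing calibration methods optimize.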
arXiv Detail & Related papers (2022-09-23T14:18:39Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
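As a generic illustration of the pruning operation studied here, a sketch of unstructured magnitude pruning (the paper's specific pruning schedules and certified-training setups are not reproduced):

```python
import numpy as np

def magnitude_prune(W, sparsity):
    # Zero out the given fraction of smallest-magnitude weights.
    k = int(np.round(sparsity * W.size))
    if k == 0:
        return W.copy()
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) > thresh, W, 0.0)

rng = np.random.default_rng(5)
W = rng.standard_normal((8, 8))
Wp = magnitude_prune(W, 0.5)
print("nonzeros kept:", int((Wp != 0).sum()), "of", W.size)
```

The surviving sparse mask is what a "lottery ticket" refers to: a subnetwork that, retrained, can match the dense model's (certified) accuracy.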
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
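The parameter saving from Toeplitz-structured layers is easy to see in code: an n x n Toeplitz matrix is constant along each diagonal, so it is determined by its first column and first row, i.e. 2n - 1 numbers instead of n^2. A minimal sketch with illustrative names and random weights:

```python
import numpy as np

def build_toeplitz(first_col, first_row):
    # Entry (i, j) of a Toeplitz matrix depends only on i - j, so the whole
    # n x n matrix is determined by its first column and first row
    # (2n - 1 parameters; first_col[0] must equal first_row[0]).
    n = len(first_col)
    W = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            W[i, j] = first_col[i - j] if i >= j else first_row[j - i]
    return W

rng = np.random.default_rng(4)
col = rng.standard_normal(4)
row = np.concatenate([[col[0]], rng.standard_normal(3)])
W = build_toeplitz(col, row)
x = rng.standard_normal(4)
y = W @ x          # a compact linear layer: 7 parameters instead of 16
print(np.round(W, 2))
```

In practice such layers are applied via FFT-based fast multiplication rather than by materializing `W`; the dense build above is only for clarity.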
arXiv Detail & Related papers (2021-09-02T13:58:12Z) - ASAT: Adaptively Scaled Adversarial Training in Time Series [21.65050910881857]
We take a first step toward adversarial training in time series analysis, using the finance field as an example.
We propose the adaptively scaled adversarial training (ASAT) in time series analysis, by treating data at different time slots with time-dependent importance weights.
Experimental results show that the proposed ASAT can improve both the accuracy and the adversarial robustness of neural networks.
arXiv Detail & Related papers (2021-08-20T03:13:34Z) - Verifying Low-dimensional Input Neural Networks via Input Quantization [12.42030531015912]
This paper revisits the original problem of verifying ACAS Xu networks.
We propose to prepend an input quantization layer to the network.
Our technique can deliver exact verification results immune to floating-point error.
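Prepending a quantization layer makes the reachable input set finite, so properties over a low-dimensional input box can be checked by exhaustive enumeration. A toy sketch in which the network, grid, and property are made up for illustration:

```python
import numpy as np
from itertools import product

def quantize(x, step):
    # Input quantization layer: snap every coordinate to a fixed grid.
    return np.round(x / step) * step

def verify_by_enumeration(f, lo, hi, step, prop):
    # After quantization the network only ever sees grid points, so a
    # property over a low-dimensional input box can be checked exactly
    # by enumerating every representable input.
    axes = [np.arange(np.round(l / step), np.round(h / step) + 1) * step
            for l, h in zip(lo, hi)]
    return all(prop(f(np.array(pt))) for pt in product(*axes))

# Toy 2-input "network" with a quantization layer in front (illustrative).
step = 0.25
W, b = np.array([[1.0, -2.0], [0.5, 1.5]]), np.array([0.1, -0.2])
f = lambda x: np.maximum(W @ quantize(x, step) + b, 0.0)

# Property: every output coordinate stays below 4 on the box [-1, 1]^2.
ok = verify_by_enumeration(f, [-1.0, -1.0], [1.0, 1.0], step,
                           lambda y: y.max() < 4.0)
print("property holds on all quantized inputs:", ok)
```

Because the check runs the network itself on each grid point, the verdict is exact and immune to the floating-point soundness issues of bound-propagation verifiers, at the cost of enumeration scaling exponentially in the input dimension.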
arXiv Detail & Related papers (2021-08-18T03:42:05Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Performance Bounds for Neural Network Estimators: Applications in Fault Detection [2.388501293246858]
We exploit recent results in quantifying the robustness of neural networks to construct and tune a model-based anomaly detector.
In tuning, we specifically provide upper bounds on the rate of false alarms expected under normal operation.
arXiv Detail & Related papers (2021-03-22T19:23:08Z) - Artificial Neural Networks generated by Low Discrepancy Sequences [59.51653996175648]
We generate artificial neural networks as random walks on a dense network graph.
Such networks can be trained sparse from scratch, avoiding the expensive procedure of training a dense network and compressing it afterwards.
We demonstrate that the artificial neural networks generated by low discrepancy sequences can achieve an accuracy within reach of their dense counterparts at a much lower computational complexity.
arXiv Detail & Related papers (2021-03-05T08:45:43Z) - Reduced-Order Neural Network Synthesis with Robustness Guarantees [0.0]
Machine learning algorithms are being adapted to run locally on on-board, potentially hardware-limited devices to improve user privacy, reduce latency, and save energy.
To address this issue, a method is introduced that automatically synthesizes reduced-order neural networks (with fewer neurons) approximating the input/output mapping of a larger one.
Worst-case bounds for this approximation error are obtained and the approach can be applied to a wide variety of neural networks architectures.
arXiv Detail & Related papers (2021-02-18T12:03:57Z) - Towards Repairing Neural Networks Correctly [6.600380575920419]
We propose a runtime verification method to ensure the correctness of neural networks.
Experiment results show that our approach effectively generates neural networks which are guaranteed to satisfy the properties.
arXiv Detail & Related papers (2020-12-03T12:31:07Z) - An SMT-Based Approach for Verifying Binarized Neural Networks [1.4394939014120451]
We propose an SMT-based technique for verifying Binarized Neural Networks.
One novelty of our technique is that it allows the verification of neural networks that include both binarized and non-binarized components.
We implement our technique as an extension to the Marabou framework, and use it to evaluate the approach on popular binarized neural network architectures.
arXiv Detail & Related papers (2020-11-05T16:21:26Z) - Feature Purification: How Adversarial Training Performs Robust Deep
Learning [66.05472746340142]
We present a principle that we call Feature Purification: one cause of the existence of adversarial examples is the accumulation of certain small dense mixtures in the hidden weights during the training process of a neural network.
We present both experiments on the CIFAR-10 dataset to illustrate this principle and a theoretical result proving that, for certain natural classification tasks, training a two-layer neural network with ReLU activation using randomly initialized gradient descent indeed satisfies this principle.
arXiv Detail & Related papers (2020-05-20T16:56:08Z) - Resolution Adaptive Networks for Efficient Inference [53.04907454606711]
We propose a novel Resolution Adaptive Network (RANet), which is inspired by the intuition that low-resolution representations are sufficient for classifying "easy" inputs.
In RANet, the input images are first routed to a lightweight sub-network that efficiently extracts low-resolution representations.
High-resolution paths in the network maintain the capability to recognize the "hard" samples.
arXiv Detail & Related papers (2020-03-16T16:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.