Training Safe Neural Networks with Global SDP Bounds
- URL: http://arxiv.org/abs/2409.09687v1
- Date: Sun, 15 Sep 2024 10:50:22 GMT
- Title: Training Safe Neural Networks with Global SDP Bounds
- Authors: Roman Soletskyi, David "davidad" Dalrymple
- Abstract summary: We present a novel approach to training neural networks with formal safety guarantees using semidefinite programming (SDP) for verification.
Our method focuses on verifying safety over large, high-dimensional input regions, addressing limitations of existing techniques that focus on adversarial robustness bounds.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel approach to training neural networks with formal safety guarantees using semidefinite programming (SDP) for verification. Our method focuses on verifying safety over large, high-dimensional input regions, addressing limitations of existing techniques that focus on adversarial robustness bounds. We introduce an ADMM-based training scheme for an accurate neural network classifier on the Adversarial Spheres dataset, achieving provably perfect recall with input dimensions up to $d=40$. This work advances the development of reliable neural network verification methods for high-dimensional systems, with potential applications in safe RL policies.
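For context, the Adversarial Spheres benchmark (Gilmer et al., 2018) consists of points drawn uniformly from two concentric spheres, one per class. A minimal Python sketch of the data, assuming the commonly used radii 1.0 and 1.3 (the paper's ADMM training scheme and SDP verifier are not reproduced here):

```python
import numpy as np

def sample_spheres(n_per_class, d, r_inner=1.0, r_outer=1.3, seed=0):
    """Sample points uniformly from two concentric spheres in R^d.
    Label 0 = inner sphere (radius r_inner), label 1 = outer sphere."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((2 * n_per_class, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # project onto unit sphere
    x[:n_per_class] *= r_inner
    x[n_per_class:] *= r_outer
    y = np.repeat([0, 1], n_per_class)
    return x, y

X, y = sample_spheres(1000, d=40)   # d=40 matches the paper's largest setting
```

"Provably perfect recall" then means the SDP verifier certifies the classifier's decision over an entire sphere, not merely on a finite test sample.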
Related papers
- Adaptive Intrusion Detection System Leveraging Dynamic Neural Models with Adversarial Learning for 5G/6G Networks [2.062593640149623]
This paper presents an advanced IDS framework that leverages adversarial training and dynamic neural networks in 5G/6G networks. Unlike conventional models, which require costly retraining to update knowledge, the proposed framework integrates incremental learning algorithms, reducing the need for frequent retraining.
arXiv Detail & Related papers (2025-12-11T13:40:37Z)
- Scalable Verification of Neural Control Barrier Functions Using Linear Bound Propagation [50.53301323864253]
Control barrier functions (CBFs) are a popular tool for safety certification of nonlinear dynamical control systems. We present a novel framework for verifying neural CBFs based on piecewise linear upper and lower bounds on the conditions required for a neural network to be a CBF. Our approach scales to larger neural networks than state-of-the-art verification procedures for CBFs.
arXiv Detail & Related papers (2025-11-09T11:51:15Z)
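For reference, the conditions being certified for a barrier function $h$ take the form $\nabla h(x) \cdot f(x, u) + \alpha(h(x)) \ge 0$ on the safe set. A toy pointwise check in Python (an illustration only; the paper's contribution is bounding these conditions over whole regions with piecewise linear relaxations rather than sampling states):

```python
import numpy as np

def cbf_condition(h, grad_h, f, x, u, alpha=1.0):
    """Check the barrier inequality  dh/dt + alpha * h(x) >= 0  at one state.
    h: barrier function, grad_h: its gradient, f: dynamics x_dot = f(x, u)."""
    return grad_h(x) @ f(x, u) + alpha * h(x) >= 0.0

# Toy double integrator: keep the position coordinate above -1.
h      = lambda x: x[0] + 1.0                  # safe set {x : x[0] >= -1}
grad_h = lambda x: np.array([1.0, 0.0])
f      = lambda x, u: np.array([x[1], u])      # position' = velocity, velocity' = u
print(cbf_condition(h, grad_h, f, x=np.array([0.0, -0.5]), u=1.0))  # True
```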
- Certified Neural Approximations of Nonlinear Dynamics [51.01318247729693]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z)
- Efficient Reachability Analysis for Convolutional Neural Networks Using Hybrid Zonotopes [4.32258850473064]
Existing set propagation-based reachability analysis methods for feedforward neural networks often struggle to achieve both scalability and accuracy.
This work presents a novel set-based approach for computing the reachable sets of convolutional neural networks.
arXiv Detail & Related papers (2025-03-13T19:45:26Z)
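The appeal of zonotopes for reachability is that affine layers map them exactly: only the center and generator matrix change. A minimal sketch of that affine step (hybrid zonotopes, and the paper's treatment of convolutional and activation layers, add machinery beyond this):

```python
import numpy as np

def affine_zonotope(W, b, center, generators):
    """Exact image of the zonotope {c + G @ eps : ||eps||_inf <= 1}
    under the affine map x -> W @ x + b."""
    return W @ center + b, W @ generators

# Push a 2-D unit box through a random affine layer.
c, G = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
W, b = rng.standard_normal((3, 2)), np.zeros(3)
c_out, G_out = affine_zonotope(W, b, c, G)
radius = np.abs(G_out).sum(axis=1)        # per-output interval half-widths
print(c_out - radius, c_out + radius)     # box enclosing the reachable set
```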
- Deep Learning Algorithms Used in Intrusion Detection Systems -- A Review [0.0]
This review surveys recent advances in applying deep learning techniques within network intrusion detection systems, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Deep Neural Networks (DNN), Long Short-Term Memory (LSTM), autoencoders (AE), Multi-Layer Perceptrons (MLP), Self-Normalizing Networks (SNN), and hybrid models.
arXiv Detail & Related papers (2024-02-26T20:57:35Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
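Plain interval bound propagation, on which QA-IBP builds, pushes elementwise bounds through each layer: affine layers via the positive/negative parts of the weights, ReLU monotonically. A minimal floating-point sketch (QA-IBP's handling of quantized weights and activations is not shown):

```python
import numpy as np

def ibp_affine(W, b, lo, hi):
    """Propagate bounds lo <= x <= hi through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_relu(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
lo, hi = ibp_relu(*ibp_affine(W1, b1, -np.ones(4), np.ones(4)))
print(lo, hi)   # sound output bounds valid for every input in the box
```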
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to protect the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
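In its simplest form, coverage-based monitoring records the activation ranges observed during training and flags inputs whose activations leave them. A toy sketch of that idea (the paper's monitoring architecture combines several coverage paradigms and is more elaborate):

```python
import numpy as np

class RangeMonitor:
    """Flag inputs whose hidden activations exit the ranges seen in training."""
    def fit(self, activations):            # activations: (n_samples, n_units)
        self.lo = activations.min(axis=0)
        self.hi = activations.max(axis=0)
        return self

    def is_suspicious(self, activation):   # one input's activation vector
        return bool(np.any((activation < self.lo) | (activation > self.hi)))

rng = np.random.default_rng(0)
monitor = RangeMonitor().fit(rng.standard_normal((1000, 16)))
print(monitor.is_suspicious(10.0 * np.ones(16)))   # far out of range -> True
```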
- Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty [14.191310794366075]
We develop a data-driven optimization-based method capable of simultaneously certifying the safety of network outputs and localizing them.
We experimentally demonstrate the efficacy and tractability of the method on a deep ReLU network.
arXiv Detail & Related papers (2020-10-02T19:13:35Z)
- SecDD: Efficient and Secure Method for Remotely Training Neural Networks [13.70633147306388]
We leverage what are typically considered the worst qualities of deep learning algorithms.
We create a method for the secure and efficient training of remotely deployed neural networks over unsecured channels.
arXiv Detail & Related papers (2020-09-19T03:37:44Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
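Schematically, for a two-layer network the mean-field limit replaces the average over $m$ neurons by an integral against a measure $\rho$ on parameters, so training becomes a flow on measures (the paper extends this picture compositionally, representing deep networks by measures over their features):

$$f_m(x) = \frac{1}{m} \sum_{i=1}^{m} a_i\,\sigma(\langle w_i, x \rangle) \;\xrightarrow[m \to \infty]{}\; f_\rho(x) = \int a\,\sigma(\langle w, x \rangle)\,\mathrm{d}\rho(a, w)$$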
- Protecting the integrity of the training procedure of neural networks [0.0]
Neural networks are used for an ever-increasing number of applications.
One of the most striking IT security problems aggravated by the opacity of neural networks is the possibility of poisoning attacks during the training phase.
We propose an approach to this problem which allows provably verifying the integrity of the training procedure by making use of standard cryptographic mechanisms.
arXiv Detail & Related papers (2020-05-14T12:57:23Z)
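A standard cryptographic building block for this kind of integrity guarantee is a hash chain over the training log: each record commits to its predecessor, so tampering with any step breaks every later commitment. A minimal sketch of the primitive (an illustration of the general mechanism, not the paper's specific protocol; the step records are hypothetical):

```python
import hashlib

def commit(prev_digest: bytes, record: bytes) -> bytes:
    """Chain one training-step record onto the previous commitment."""
    return hashlib.sha256(prev_digest + record).digest()

digest = b"\x00" * 32                       # genesis value
log = [b"step=0|batch=ab12|weights=cd34",   # hypothetical step records
       b"step=1|batch=ef56|weights=0a9b"]
for record in log:
    digest = commit(digest, record)
print(digest.hex())   # commitment to the whole run; an auditor replaying
                      # the same records must reproduce it exactly
```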
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation algorithm is introduced to train the network weights and noise parameters consecutively.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
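A toy PyTorch sketch of the alternating scheme (an assumed interface, not the authors' implementation): a layer injects Gaussian noise with a learnable scale, and weight and noise parameters get separate optimizers stepped in alternation:

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.log_sigma = nn.Parameter(torch.full((d_out,), -2.0))  # learnable noise scale

    def forward(self, x):
        h = self.linear(x)
        if self.training:                                  # perturb only during training
            h = h + torch.exp(self.log_sigma) * torch.randn_like(h)
        return h

model = nn.Sequential(NoisyLinear(10, 32), nn.ReLU(), nn.Linear(32, 2))
weights = [p for n, p in model.named_parameters() if "log_sigma" not in n]
noises  = [p for n, p in model.named_parameters() if "log_sigma" in n]
opt_w = torch.optim.SGD(weights, lr=1e-2)   # steps on the task loss
opt_n = torch.optim.SGD(noises, lr=1e-3)    # steps on the noise parameters
# Training would alternate opt_w and opt_n steps; the paper's regularizer
# encouraging larger noise is omitted here.
```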
- HYDRA: Pruning Adversarially Robust Neural Networks [58.061681100058316]
Deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size.
We propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune.
We demonstrate that our approach, titled HYDRA, achieves compressed networks with state-of-the-art benign and robust accuracy, simultaneously.
arXiv Detail & Related papers (2020-02-24T19:54:53Z)
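The core idea is to pick the pruning mask from importance scores tied to the robust training objective rather than raw weight magnitude. A toy sketch of score-based mask selection (HYDRA learns the scores by gradient descent on the robust loss; the magnitude scores below are only a stand-in):

```python
import numpy as np

def topk_mask(scores, sparsity):
    """Keep the fraction (1 - sparsity) of connections with the highest scores."""
    k = int(scores.size * (1.0 - sparsity))
    threshold = np.sort(scores.ravel())[-k]
    return (scores >= threshold).astype(scores.dtype)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
scores = np.abs(W)                    # placeholder scores; HYDRA optimizes these
mask = topk_mask(scores, sparsity=0.9)
W_pruned = W * mask
print(mask.mean())                    # ~0.1 of connections kept
```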
- Scalable Quantitative Verification For Deep Neural Networks [44.570783946111334]
We propose a test-driven verification framework for deep neural networks (DNNs).
Our technique runs tests until the soundness of a formal probabilistic property can be proven.
Our work paves the way for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees.
arXiv Detail & Related papers (2020-02-17T09:53:50Z)
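A back-of-the-envelope version of such test-driven guarantees comes from Hoeffding's inequality: estimating the probability that a property holds to within $\epsilon$ with confidence $1-\delta$ needs $n \ge \ln(2/\delta) / (2\epsilon^2)$ i.i.d. tests. A sketch (the paper's framework is more refined, stopping adaptively once soundness is provable):

```python
import math

def hoeffding_samples(eps: float, delta: float) -> int:
    """Tests needed so the empirical violation rate is within eps of the
    true rate with probability at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

print(hoeffding_samples(eps=0.01, delta=1e-6))   # 72544 tests
```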
This list is automatically generated from the titles and abstracts of the papers on this site.