Tighter Abstract Queries in Neural Network Verification
- URL: http://arxiv.org/abs/2210.12871v2
- Date: Sun, 14 May 2023 22:29:05 GMT
- Title: Tighter Abstract Queries in Neural Network Verification
- Authors: Elazar Cohen, Yizhak Yisrael Elboher, Clark Barrett, Guy Katz
- Abstract summary: We present CEGARETTE, a novel verification mechanism where both the system and the property are abstracted and refined simultaneously.
Our results are very promising and demonstrate a significant performance improvement across multiple benchmarks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks have become critical components of reactive systems in
various domains within computer science. Despite their excellent performance,
using neural networks entails numerous risks that stem from our limited ability
to understand and reason about their behavior. Due to these risks, various
formal methods have been proposed for verifying neural networks;
unfortunately, these typically struggle to scale. Recent
attempts have demonstrated that abstraction-refinement approaches could play a
significant role in mitigating these limitations; but these approaches can
often produce networks so abstract that they become unsuitable for
verification. To deal with this issue, we present CEGARETTE, a novel
verification mechanism where both the system and the property are abstracted
and refined simultaneously. We observe that this approach allows us to produce
abstract networks which are both small and sufficiently accurate, allowing for
quick verification times while avoiding a large number of refinement steps. For
evaluation purposes, we implemented CEGARETTE as an extension to the recently
proposed CEGAR-NN framework. Our results are very promising and demonstrate a
significant performance improvement across multiple benchmarks.
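
To make the abstraction-refinement idea concrete, here is a minimal sketch of a CEGAR-style loop in which the network and the property are abstracted and refined together, as the abstract describes. All helper names (abstract_network, abstract_property, check, is_spurious, refine) are hypothetical placeholders, not the authors' actual API.

```python
# Minimal sketch of a CEGAR-style loop in the spirit of CEGARETTE.
# Every helper below is a hypothetical placeholder; the real tool is
# implemented as an extension of the CEGAR-NN framework.

def verify_with_abstraction(network, prop):
    abs_net = abstract_network(network)   # smaller, over-approximating network
    abs_prop = abstract_property(prop)    # abstracted, easier-to-check property
    while True:
        holds, counterexample = check(abs_net, abs_prop)
        if holds:
            # The abstract query over-approximates the concrete one,
            # so the original property holds on the original network.
            return "SAFE", None
        if not is_spurious(counterexample, network, prop):
            # The counterexample also violates the concrete query.
            return "UNSAFE", counterexample
        # Spurious counterexample: refine both the network and the
        # property, yielding a tighter abstract query for the next round.
        abs_net, abs_prop = refine(abs_net, abs_prop, counterexample)
```

The claimed benefit of refining the property alongside the network is that the abstract network can stay small while the overall query remains accurate enough to avoid long chains of refinement steps.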
Related papers
- Towards Scalable and Versatile Weight Space Learning (2024-06-14)
  This paper introduces the SANE approach to weight-space learning.
  The method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
- How neural networks learn to classify chaotic time series (2023-06-04)
  We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
  We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks (2022-11-29)
  We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
  Recent work has shown that floating-point neural networks verified to be robust can become vulnerable to adversarial attacks after quantization.
  We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs (see the interval-propagation sketch after this list).
- On Optimizing Back-Substitution Methods for Neural Network Verification (2022-08-16)
  We present an approach for making back-substitution produce tighter bounds.
  The technique is general, in the sense that it can be integrated into numerous existing symbolic-bound propagation techniques.
- Neural Network Verification using Residual Reasoning (2022-08-05)
  We present an enhancement to abstraction-based verification of neural networks based on residual reasoning.
  In essence, the method allows the verifier to store information about parts of the search space in which the refined network is guaranteed to behave correctly.
- Abstraction and Refinement: Towards Scalable and Exact Verification of Neural Networks (2022-07-02)
  Deep neural networks (DNNs) are increasingly deployed in practice, but their lack of robustness hinders their application in safety-critical domains.
  We present a novel abstraction-refinement approach for scalable and exact DNN verification.
- Toward Certified Robustness Against Real-World Distribution Shifts (2022-06-08)
  We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
  A unique challenge in this setting is that existing verifiers cannot tightly approximate sigmoid activations.
  We propose a general meta-algorithm for handling sigmoid activations that leverages classical notions of counterexample-guided abstraction refinement.
- An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks (2022-01-06)
  We present the Cnn-Abs framework, which is aimed at the verification of convolutional networks.
  The core of Cnn-Abs is an abstraction-refinement technique that simplifies the verification problem.
  Cnn-Abs can significantly boost the performance of a state-of-the-art verification engine, reducing runtime by 15.7% on average.
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis (2021-01-28)
  This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different unsafe inputs.
  Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
- ESPN: Extremely Sparse Pruned Networks (2020-06-28)
  We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
  The algorithm is a hybrid between single-shot network pruning methods and Lottery-Ticket-style approaches.
- Verifying Recurrent Neural Networks using Invariant Inference (2020-04-06)
  We propose a novel approach for verifying properties of recurrent neural networks, a widespread variant of neural networks.
  The approach is based on inferring invariants, which allow us to reduce the complex problem of verifying recurrent networks to simpler, non-recurrent problems.
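
As a footnote to the QA-IBP entry above: interval bound propagation (IBP) is a standard bounding technique that is easy to state. The sketch below shows plain (non-quantization-aware) IBP through one affine layer followed by a ReLU; the weights and input are made-up illustrative values, not taken from any of the papers.

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate elementwise bounds [lo, hi] through x -> W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius   # worst-case spread per output neuron
    return out_center - out_radius, out_center + out_radius

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Example: sound output bounds for a tiny layer on an L-infinity input ball.
W, b = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
x, eps = np.array([0.3, -0.2]), 0.1
lo, hi = ibp_relu(*ibp_affine(x - eps, x + eps, W, b))
print(lo, hi)  # every concrete output of the layer lies within these bounds
```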