Verifying Global Neural Network Specifications using Hyperproperties
- URL: http://arxiv.org/abs/2306.12495v1
- Date: Wed, 21 Jun 2023 18:08:55 GMT
- Title: Verifying Global Neural Network Specifications using Hyperproperties
- Authors: David Boetius and Stefan Leue
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current approaches to neural network verification focus on specifications
that target small regions around known input data points, such as local
robustness. Thus, using these approaches, we cannot obtain guarantees for
inputs that are not close to known inputs. Yet, it is highly likely that a
neural network will encounter such truly unseen inputs during its application.
We study global specifications that - when satisfied - provide guarantees for
all potential inputs. We introduce a hyperproperty formalism that allows for
expressing global specifications such as monotonicity, Lipschitz continuity,
global robustness, and dependency fairness. Our formalism enables verifying
global specifications using existing neural network verification approaches by
leveraging capabilities for verifying general computational graphs. Thereby, we
extend the scope of guarantees that can be provided using existing methods.
Recent success in verifying specific global specifications shows that attaining
strong guarantees for all potential data points is feasible.
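To make the hyperproperty idea concrete, a 2-safety property such as global robustness relates two executions of the same network, and can be reduced to an ordinary property of a self-composed "product" network that runs two copies of the model. The sketch below is a minimal illustration of that reduction, not the authors' construction: the network, its weights, and the sampling loop are all placeholders, and random sampling can only falsify the property, never prove it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ReLU network with arbitrary placeholder weights (not from the paper).
W1, b1 = rng.normal(size=(8, 2)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)) * 0.5, np.zeros(1)

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def product_network(x1, x2):
    # Self-composition: two copies of f in one computational graph, so a
    # 2-safety hyperproperty of f becomes an ordinary property of this graph.
    return f(x1), f(x2)

def violates_global_robustness(x1, x2, delta=0.1, eps=1.0):
    # Hyperproperty: ||x1 - x2|| <= delta  implies  ||f(x1) - f(x2)|| <= eps
    y1, y2 = product_network(x1, x2)
    return bool(np.linalg.norm(x1 - x2) <= delta
                and np.linalg.norm(y1 - y2) > eps)

# Random sampling can only falsify; a complete verifier run on the product
# network would be needed to prove the property for all input pairs.
found = any(
    violates_global_robustness(x, x + rng.uniform(-0.1, 0.1, size=2))
    for x in rng.uniform(-1, 1, size=(1000, 2))
)
print("counterexample found:", found)
```

In an actual verification pipeline, the product network would be handed to a verifier that supports general computational graphs, turning the hyperproperty into a standard reachability query.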
Related papers
- A Formally Verified Robustness Certifier for Neural Networks (Extended Version)
Neural networks are susceptible to minor perturbations in input that cause them to misclassify. Globally-robust neural networks employ a function to certify that the classification of an input cannot be altered by such a perturbation. We describe the program, its specifications, and the important design decisions taken for its implementation and verification.
arXiv Detail & Related papers (2025-05-11T12:05:14Z)
- Global Convergence and Rich Feature Learning in $L$-Layer Infinite-Width Neural Networks under $μ$P Parametrization
In this paper, we investigate the training dynamics of $L$-layer neural networks trained with stochastic gradient descent (SGD) in the tensor program framework.
We show that SGD enables these networks to learn linearly independent features that substantially deviate from their initial values.
This rich feature space captures relevant data information and ensures that any convergent point of the training process is a global minimum.
arXiv Detail & Related papers (2025-03-12T17:33:13Z)
- VeriFlow: Modeling Distributions for Neural Network Verification
Formal verification has emerged as a promising method to ensure the safety and reliability of neural networks. We propose the VeriFlow architecture as a flow-based density model tailored to allow any verification approach to restrict its search to some data distribution of interest.
arXiv Detail & Related papers (2024-06-20T12:41:39Z)
- Prospector Heads: Generalized Feature Attribution for Large Models & Data
We introduce prospector heads, an efficient and interpretable alternative to explanation-based attribution methods.
We demonstrate how prospector heads enable improved interpretation and discovery of class-specific patterns in input data.
arXiv Detail & Related papers (2024-02-18T23:01:28Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate all regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
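The general idea of statistical tolerance limits can be sketched in a few lines. The following is a generic, distribution-free (Wilks-style) upper tolerance limit on the output of a placeholder function, a simplified illustration of the statistical ingredient rather than the epsilon-ProVe algorithm itself; the function, sample size, and coverage level are all my assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Placeholder "network": any deterministic function of the input.
    return np.tanh(3.0 * x[0]) + 0.1 * x[1]

# Distribution-free (Wilks-style) upper tolerance limit: with n i.i.d.
# samples, the sample maximum upper-bounds at least a `coverage` fraction
# of the output distribution with confidence >= 1 - coverage**n.
n, coverage = 1000, 0.99
samples = np.array([f(rng.uniform(-1.0, 1.0, size=2)) for _ in range(n)])
upper_limit = samples.max()
confidence = 1.0 - coverage ** n
print(f"upper tolerance limit {upper_limit:.3f} at confidence {confidence:.5f}")
```

The appeal of such bounds is that they require no assumptions on the output distribution, only i.i.d. sampling, which is why they scale to networks where exact reachability is intractable.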
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- Toward Reliable Neural Specifications
Existing specifications for neural networks are in the paradigm of data as specification.
We propose a new family of specifications called neural representation as specification.
We show that by using neural activation patterns (NAPs), we can verify predictions over the entire input space, while still recalling 84% of the data.
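To make the "neural representation as specification" idea concrete: a neural activation pattern for a ReLU layer is simply the on/off state of its units, and inputs sharing a pattern lie in the same linear region of the network. The toy sketch below, with random placeholder weights, is my own simplification of that concept, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(size=(6, 3))   # hidden-layer weights (illustrative only)

def activation_pattern(x):
    # On/off state of the hidden ReLU units for input x. A specification
    # can be phrased over such patterns instead of balls around inputs.
    return tuple(W1 @ x > 0.0)

x = np.array([0.2, -0.1, 0.4])
# Positive scaling preserves every pre-activation sign, hence the pattern.
same = activation_pattern(x) == activation_pattern(2.0 * x)
print("pattern preserved under positive scaling:", same)
```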
arXiv Detail & Related papers (2022-10-28T13:21:28Z)
- Sound and Complete Verification of Polynomial Networks
Polynomial Networks (PNs) have recently demonstrated promising performance on face and image recognition.
Existing verification algorithms on ReLU neural networks (NNs) based on branch and bound (BaB) techniques cannot be trivially applied to PN verification.
We devise a new bounding method, equipped with BaB for global convergence guarantees, called VPN.
arXiv Detail & Related papers (2022-09-15T11:50:43Z)
- Learning Invariances with Generalised Input-Convex Neural Networks
We introduce a novel class of flexible neural networks that represent functions guaranteed to have connected level sets forming smooth manifolds.
We show that our novel technique for characterising invariances is a powerful generative data exploration tool in real-world applications, such as computational chemistry.
arXiv Detail & Related papers (2022-04-14T15:03:30Z)
- Improving Uncertainty Calibration via Prior Augmented Data
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
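A minimal sketch of that general idea follows; the confidence threshold and mixing weight are hand-set assumptions of mine, not the paper's procedure, which identifies overconfident regions of feature space rather than thresholding individual predictions.

```python
import numpy as np

def smooth_towards_prior(p, prior, conf_threshold=0.95, lam=0.5):
    # If the prediction is overconfident (max probability above threshold),
    # mix it with the label prior, raising its entropy; otherwise leave it.
    # Threshold and mixing weight are illustrative, not from the paper.
    p = np.asarray(p, dtype=float)
    if p.max() > conf_threshold:
        p = (1 - lam) * p + lam * prior
    return p / p.sum()

prior = np.full(3, 1 / 3)          # assumed uniform label prior
p = smooth_towards_prior([0.98, 0.01, 0.01], prior)
print("smoothed prediction:", p)
```

Note that the mixing keeps the argmax intact while reducing confidence, so accuracy is unchanged even as calibration improves.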
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Globally-Robust Neural Networks
We formalize a notion of global robustness, which captures the operational properties of on-line local robustness certification.
We show that widely-used architectures can be easily adapted to this objective by incorporating efficient global Lipschitz bounds into the network.
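The core mechanism can be sketched as follows: since ReLU is 1-Lipschitz, the product of layer spectral norms upper-bounds the network's global Lipschitz constant, which in turn yields a certified radius around any input. The toy weights below are placeholders, and this crude product bound is far looser than what such methods actually train for and compute.

```python
import numpy as np

rng = np.random.default_rng(2)

# Small two-layer classifier with placeholder weights (illustrative only).
W1 = rng.normal(size=(16, 4)) * 0.3
W2 = rng.normal(size=(3, 16)) * 0.3

def logits(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# ReLU is 1-Lipschitz, so the product of layer spectral norms upper-bounds
# the network's global Lipschitz constant in the l2 norm.
L = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

def certified_radius(x):
    # No perturbation d with ||d|| < margin / (sqrt(2) * L) can change the
    # argmax: any gap between two logits moves by at most sqrt(2)*L*||d||.
    y = np.sort(logits(x))
    margin = y[-1] - y[-2]
    return margin / (np.sqrt(2) * L)

x = np.array([0.5, -0.2, 0.1, 0.3])
print(f"certified l2 radius around x: {certified_radius(x):.4f}")
```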
arXiv Detail & Related papers (2021-02-16T21:10:52Z)
- Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty
We develop a data-driven optimization-based method capable of simultaneously certifying the safety of network outputs and localizing them.
We experimentally demonstrate the efficacy and tractability of the method on a deep ReLU network.
arXiv Detail & Related papers (2020-10-02T19:13:35Z)
- Global Robustness Verification Networks
We develop a global robustness verification framework with three components.
A new network architecture, the Sliding Door Network (SDN), enables feasible rule-based back-propagation.
We demonstrate the effectiveness of our approach on both synthetic and real datasets.
arXiv Detail & Related papers (2020-06-08T08:09:20Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.