Toward Reliable Neural Specifications
- URL: http://arxiv.org/abs/2210.16114v1
- Date: Fri, 28 Oct 2022 13:21:28 GMT
- Title: Toward Reliable Neural Specifications
- Authors: Chuqin Geng, Nham Le, Xiaojie Xu, Zhaoyue Wang, Arie Gurfinkel, Xujie
Si
- Abstract summary: Existing specifications for neural networks are in the paradigm of data as specification.
We propose a new family of specifications called neural representation as specification.
We show that by using NAPs, we can verify predictions over the entire input space, while still recalling 84% of the data.
- Score: 3.2722498341029653
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Having reliable specifications is an unavoidable challenge in achieving
verifiable correctness, robustness, and interpretability of AI systems.
Existing specifications for neural networks are in the paradigm of data as
specification. That is, the local neighborhood centered around a reference
input is considered correct (or robust). However, our empirical study
shows that such specifications are extremely overfitted: usually no data
points from the test set lie in the certified region of the reference input,
making them impractical for real-world applications. We propose a new family of
specifications called neural representation as specification, which uses the
intrinsic information of neural networks, namely neural activation patterns
(NAPs), rather than input data, to specify the correctness and/or robustness of neural
network predictions. We present a simple statistical approach to mining
dominant neural activation patterns. We analyze NAPs from a statistical point
of view and find that a single NAP can cover a large number of training and
testing data points whereas ad hoc data-as-specification only covers the given
reference data point. To show the effectiveness of discovered NAPs, we formally
verify several important properties, such as that various types of
misclassification can never happen for a given NAP and that there is no ambiguity
between different NAPs. We show that by using NAPs, we can verify predictions
over the entire input space, while still recalling 84% of the data. Thus, we
argue that using NAPs is a more reliable and extensible specification for
neural network verification.
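As a rough illustration of what the statistical mining of a dominant NAP could look like, the sketch below marks, for each class, the post-ReLU neurons that are consistently activated (or deactivated) across most training examples of that class. The threshold `delta`, the choice of layer, and the helper names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def mine_dominant_nap(hidden_acts, labels, target_class, delta=0.95):
    """Estimate a dominant neural activation pattern (NAP) for one class.

    hidden_acts: (N, D) array of post-ReLU activations from a chosen layer.
    labels:      (N,) array of class labels.
    target_class: class whose NAP we want.
    delta:       fraction of class examples that must agree on a neuron's
                 on/off state for it to enter the NAP (illustrative threshold).
    Returns a dict mapping neuron index -> 1 (activated) or 0 (deactivated).
    """
    acts = hidden_acts[labels == target_class]   # class-specific activations
    on_freq = (acts > 0).mean(axis=0)            # how often each neuron fires

    nap = {}
    for i, freq in enumerate(on_freq):
        if freq >= delta:         # almost always activated for this class
            nap[i] = 1
        elif freq <= 1 - delta:   # almost always deactivated for this class
            nap[i] = 0
        # neurons with inconsistent states are left unconstrained
    return nap

def satisfies_nap(activation, nap):
    """Check whether a single input's activations match a mined NAP."""
    return all((activation[i] > 0) == bool(state) for i, state in nap.items())
```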
Related papers
- VeriFlow: Modeling Distributions for Neural Network Verification [4.3012765978447565]
Formal verification has emerged as a promising method to ensure the safety and reliability of neural networks.
We propose the VeriFlow architecture, a flow-based density model tailored to allow any verification approach to restrict its search to some data distribution of interest.
arXiv Detail & Related papers (2024-06-20T12:41:39Z) - Learning Minimal Neural Specifications [5.497856406348316]
The problem studied is: given a neural network, find a minimal (general) NAP specification that is sufficient for formal verification of the network's robustness.
We propose several exact and approximate approaches to find minimal NAP specifications.
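The paper proposes exact and approximate approaches; purely as an illustration of one possible approximate strategy, the sketch below greedily drops neurons from a NAP as long as a verifier still succeeds. The `verify_robust_under_nap` interface and the greedy order are assumptions, not the paper's algorithms.

```python
def minimize_nap_greedy(nap, verify_robust_under_nap):
    """Greedily shrink a NAP while formal verification still succeeds.

    nap: dict mapping neuron index -> required on/off state (1 or 0).
    verify_robust_under_nap: callable taking a NAP and returning True if the
        network's robustness property can be verified under it
        (hypothetical interface, e.g. backed by an off-the-shelf verifier).
    Returns a (locally) minimal NAP that still suffices for verification.
    """
    assert verify_robust_under_nap(nap), "initial NAP must already verify"
    current = dict(nap)
    for neuron in list(current):
        state = current.pop(neuron)          # tentatively drop this constraint
        if not verify_robust_under_nap(current):
            current[neuron] = state          # restore it if verification fails
    return current
```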
arXiv Detail & Related papers (2024-04-06T15:31:20Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
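GNNSafe builds on energy-based OOD detection; as background only, the sketch below computes the standard energy score from a model's logits and thresholds it, without the graph-propagation step that is specific to GNNSafe. The temperature and threshold values are illustrative assumptions.

```python
import torch

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Standard energy score: lower energy indicates more in-distribution.

    logits: (N, C) tensor of per-node (or per-example) class logits.
    """
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

def flag_ood(logits: torch.Tensor, threshold: float = -5.0) -> torch.Tensor:
    """Flag inputs whose energy exceeds an illustrative threshold as OOD."""
    return energy_score(logits) > threshold
```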
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - $p$-DkNN: Out-of-Distribution Detection Through Statistical Testing of
Deep Representations [32.99800144249333]
We introduce $p$-DkNN, a novel inference procedure that takes a trained deep neural network and analyzes the similarity structures of its intermediate hidden representations.
We find that $p$-DkNN forces adaptive attackers crafting adversarial examples, a form of worst-case OOD inputs, to introduce semantically meaningful changes to the inputs.
arXiv Detail & Related papers (2022-07-25T21:42:08Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
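As a minimal sketch of the Nadaraya-Watson estimate that NUQ builds on, the conditional label distribution at a query point can be estimated as a kernel-weighted vote over training labels. The Gaussian kernel, bandwidth, and embedding choice below are assumptions; the paper's full uncertainty quantification is not shown.

```python
import numpy as np

def nadaraya_watson_label_dist(query, train_x, train_y, n_classes, bandwidth=1.0):
    """Kernel-weighted estimate of p(y | x) at a query point.

    query:   (D,) feature/embedding of the test point.
    train_x: (N, D) training features; train_y: (N,) integer labels.
    """
    # Gaussian kernel weights based on squared distance to each training point.
    d2 = np.sum((train_x - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))

    # Weighted vote per class, normalized to a probability distribution.
    probs = np.zeros(n_classes)
    for c in range(n_classes):
        probs[c] = w[train_y == c].sum()
    total = probs.sum()
    return probs / total if total > 0 else np.full(n_classes, 1.0 / n_classes)
```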
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art inductive logic programming (ILP) systems based on Answer Set semantics with neural networks to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
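Illustratively, raising a prediction's entropy towards the label prior can be done by interpolating the predicted distribution with that prior. The mixing weight and the confidence trigger below are assumptions, not the paper's actual criterion (which identifies overconfident regions via prior-augmented data).

```python
import numpy as np

def raise_entropy_towards_prior(pred, prior, mix=0.5, conf_trigger=0.9):
    """Soften an overconfident prediction towards the label prior.

    pred:  (C,) predicted class probabilities.
    prior: (C,) prior label distribution (e.g. empirical class frequencies).
    mix:   interpolation weight towards the prior (illustrative).
    conf_trigger: only soften predictions more confident than this (illustrative).
    """
    pred = np.asarray(pred, dtype=float)
    if pred.max() < conf_trigger:
        return pred                       # leave less confident predictions alone
    softened = (1.0 - mix) * pred + mix * np.asarray(prior, dtype=float)
    return softened / softened.sum()      # renormalize (guards against rounding)
```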
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - Unifying supervised learning and VAEs -- coverage, systematics and
goodness-of-fit in normalizing-flow based neural network models for
astro-particle reconstructions [0.0]
In such neural-network-based reconstructions, statistical uncertainties, coverage, systematic uncertainties, and goodness-of-fit measures are often not calculated.
We show that a KL-divergence objective on the joint distribution of data and labels allows us to unify supervised learning and variational autoencoders (VAEs).
We discuss how to calculate coverage probabilities without numerical integration for specific "base-ordered" contours.
arXiv Detail & Related papers (2020-08-13T11:28:57Z) - Increasing Trustworthiness of Deep Neural Networks via Accuracy
Monitoring [20.456742449675904]
The inference accuracy of deep neural networks (DNNs) is a crucial performance metric, but it can vary greatly in practice depending on the actual test dataset.
This has raised significant concerns with trustworthiness of DNNs, especially in safety-critical applications.
We propose a neural network-based accuracy monitor model, which only takes the deployed DNN's softmax probability output as its input.
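A rough sketch of such a monitor, under the assumption that it is a small classifier trained to predict whether the deployed DNN's prediction is correct from its softmax output alone; logistic regression is used here as a stand-in for the paper's neural monitor, and the interface is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_accuracy_monitor(softmax_outputs, predictions, true_labels):
    """Train a monitor that predicts correctness from softmax outputs alone.

    softmax_outputs: (N, C) softmax probabilities from the deployed DNN
                     on a held-out labelled set.
    predictions:     (N,) the DNN's predicted labels on that set.
    true_labels:     (N,) ground-truth labels.
    """
    correct = (predictions == true_labels).astype(int)   # 1 = DNN was right
    monitor = LogisticRegression(max_iter=1000)
    monitor.fit(softmax_outputs, correct)
    return monitor

def estimated_accuracy(monitor, softmax_outputs):
    """Estimate accuracy on unlabelled inputs as mean predicted correctness."""
    return monitor.predict_proba(softmax_outputs)[:, 1].mean()
```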
arXiv Detail & Related papers (2020-07-03T03:09:36Z) - CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present CryptoSPN, a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.