VeriFlow: Modeling Distributions for Neural Network Verification
- URL: http://arxiv.org/abs/2406.14265v2
- Date: Fri, 26 Sep 2025 10:57:18 GMT
- Title: VeriFlow: Modeling Distributions for Neural Network Verification
- Authors: Faried Abu Zaid, Daniel Neider, Mustafa Yalçıner,
- Abstract summary: Formal verification has emerged as a promising method to ensure the safety and reliability of neural networks. We propose the VeriFlow architecture as a flow-based density model tailored to allow any verification approach to restrict its search to some data distribution of interest.
- Score: 3.510536859655114
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Formal verification has emerged as a promising method to ensure the safety and reliability of neural networks. However, many relevant properties, such as fairness or global robustness, pertain to the entire input space. If one applies verification techniques naively, the neural network is checked even on inputs that do not occur in the real world and have no meaning. To tackle this shortcoming, we propose the VeriFlow architecture as a flow-based density model tailored to allow any verification approach to restrict its search to some data distribution of interest. We argue that our architecture is particularly well suited for this purpose because of two major properties. First, we show that the transformation that is defined by our model is piecewise affine. Therefore, the model allows the usage of verifiers based on constraint solving with linear arithmetic. Second, upper density level sets (UDL) of the data distribution are definable via linear constraints in the latent space. As a consequence, representations of UDLs specified by a given probability are effectively computable in the latent space. This property allows for effective verification with a fine-grained, probabilistically interpretable control of how atypical the inputs subject to verification are.
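The latent-space construction in the abstract, computing an upper density level set of a given probability mass as linear constraints in latent space, can be sketched in a few lines. This is a hedged illustration: the abstract does not state VeriFlow's base distribution, so the sketch assumes an i.i.d. standard Laplace base, whose density level sets are L1 balls (expressible with linear constraints); `udl_l1_radius` is a hypothetical helper, not the paper's API.

```python
import numpy as np
from scipy.stats import gamma

def udl_l1_radius(p: float, d: int) -> float:
    """Radius r of the latent L1 ball {z : ||z||_1 <= r} containing
    probability mass p under an i.i.d. Laplace(0, 1) base distribution.
    Each |z_i| ~ Exp(1), so ||z||_1 ~ Gamma(d, 1) and r is its p-quantile."""
    return float(gamma.ppf(p, a=d))

# Example: the 99% upper density level set in a 10-dimensional latent space.
# A verifier would then restrict its search to inputs x whose latent image
# z = f(x) satisfies the linear constraints encoding ||z||_1 <= r.
r = udl_l1_radius(0.99, 10)
```

Because the flow itself is piecewise affine, a constraint such as ||f(x)||_1 <= r stays within linear arithmetic, which is what makes constraint-solving verifiers applicable.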
Related papers
- TorchLean: Formalizing Neural Networks in Lean [71.68907600404513]
We introduce TorchLean, a framework that treats learned models as first-class mathematical objects with a single, precise semantics shared by execution and verification. We validate TorchLean end-to-end on certified robustness, physics-informed residual bounds for PINNs, and Lyapunov-style neural controller verification.
arXiv Detail & Related papers (2026-02-26T05:11:44Z) - Efficiently Verifiable Proofs of Data Attribution [9.05608916348947]
We propose an interactive verification paradigm for data attribution. We provide formal completeness, soundness, and efficiency guarantees in the sense of Probably-Approximately-Correct (PAC) verification.
arXiv Detail & Related papers (2025-08-14T17:36:01Z) - Out of the Shadows: Exploring a Latent Space for Neural Network Verification [8.97708612393722]
We present an efficient verification tool for neural networks that uses our iterative refinement to significantly reduce the number of subproblems in a branch-and-bound procedure. We demonstrate that our tool achieves competitive performance, which would place it among the top-ranking tools of the last neural network verification competition.
arXiv Detail & Related papers (2025-05-23T13:05:07Z) - The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
#DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - Validation Diagnostics for SBI algorithms based on Normalizing Flows [55.41644538483948]
This work proposes easy-to-interpret validation diagnostics for multi-dimensional conditional (posterior) density estimators based on NF.
It also offers theoretical guarantees based on results of local consistency.
This work should help the design of better specified models or drive the development of novel SBI-algorithms.
arXiv Detail & Related papers (2022-11-17T15:48:06Z) - Toward Reliable Neural Specifications [3.2722498341029653]
Existing specifications for neural networks are in the paradigm of data as specification.
We propose a new family of specifications called neural representation as specification.
We show that by using NAP, we can verify the prediction of the entire input space, while still recalling 84% of the data.
arXiv Detail & Related papers (2022-10-28T13:21:28Z) - $p$-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations [32.99800144249333]
We introduce $p$-DkNN, a novel inference procedure that takes a trained deep neural network and analyzes the similarity structures of its intermediate hidden representations.
We find that $p$-DkNN forces adaptive attackers crafting adversarial examples, a form of worst-case OOD inputs, to introduce semantically meaningful changes to the inputs.
arXiv Detail & Related papers (2022-07-25T21:42:08Z) - Uncertainty quantification of two-phase flow in porous media via coupled-TgNN surrogate model [6.705438773768439]
Uncertainty quantification (UQ) of subsurface two-phase flow usually requires numerous executions of forward simulations under varying conditions.
In this work, a novel coupled theory-guided neural network (TgNN) based surrogate model is built to facilitate efficiency under the premise of satisfactory accuracy.
arXiv Detail & Related papers (2022-05-28T02:33:46Z) - Uncertainty Modeling for Out-of-Distribution Generalization [56.957731893992495]
We argue that the feature statistics can be properly manipulated to improve the generalization ability of deep learning models.
Common methods often consider the feature statistics as deterministic values measured from the learned features.
We improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training.
arXiv Detail & Related papers (2022-02-08T16:09:12Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
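The Nadaraya-Watson estimate of the conditional label distribution that this summary refers to can be sketched as follows. A hedged illustration: the Gaussian kernel, the bandwidth, and the function name below are illustrative choices, not details taken from the NUQ paper.

```python
import numpy as np

def nw_label_distribution(x, X_train, Y_onehot, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): a kernel-weighted average
    of the one-hot training labels around the query point x."""
    sq_dists = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-sq_dists / (2 * bandwidth ** 2))  # Gaussian kernel weights
    return w @ Y_onehot / w.sum()

# Toy example: two well-separated classes in 2-D.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
Y = np.eye(2)[[0, 0, 1, 1]]  # one-hot labels
p = nw_label_distribution(np.array([0.05, 0.0]), X, Y)
```

An uncertainty score for the prediction can then be derived from this distribution, for example from its entropy or from the magnitude of the kernel-weight normalizer.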
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Certifying Model Accuracy under Distribution Shifts [151.67113334248464]
We present provable robustness guarantees on the accuracy of a model under bounded Wasserstein shifts of the data distribution.
We show that a simple procedure that randomizes the input of the model within a transformation space is provably robust to distributional shifts under the transformation.
arXiv Detail & Related papers (2022-01-28T22:03:50Z) - Verifying Low-dimensional Input Neural Networks via Input Quantization [12.42030531015912]
This paper revisits the original problem of verifying ACAS Xu networks.
We propose to prepend an input quantization layer to the network.
Our technique can deliver exact verification results immune to floating-point error.
arXiv Detail & Related papers (2021-08-18T03:42:05Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
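The monitoring recipe above (fit a density model to in-distribution activations, then flag inputs whose activations have low density) can be sketched as follows, with a diagonal-Gaussian density substituted for DAAIN's learned normalizing-flow estimator; the class name and the 1% threshold are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class ActivationMonitor:
    """Fit a diagonal-Gaussian density to in-distribution activations and
    flag inputs whose activation log-density falls below a threshold."""

    def fit(self, acts):
        # acts: (n, d) array of hidden activations on in-distribution data.
        self.mu = acts.mean(axis=0)
        self.var = acts.var(axis=0) + 1e-6          # avoid division by zero
        scores = self.log_density(acts)
        self.threshold = np.quantile(scores, 0.01)  # flag the lowest 1%
        return self

    def log_density(self, acts):
        return -0.5 * np.sum((acts - self.mu) ** 2 / self.var
                             + np.log(2 * np.pi * self.var), axis=1)

    def is_ood(self, acts):
        return self.log_density(acts) < self.threshold
```

In the paper's setting the density model is a normalizing flow over monitored activations; the Gaussian here only keeps the sketch short.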
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - Deconvolutional Density Network: Free-Form Conditional Density Estimation [6.805003206706124]
A neural network can be used to compute the output distribution explicitly.
We show the benefits of modeling free-form distributions using deconvolution.
We compare our method to a number of other density-estimation approaches.
arXiv Detail & Related papers (2021-05-29T20:09:25Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - DoLFIn: Distributions over Latent Features for Interpretability [8.807587076209568]
We propose a novel strategy for achieving interpretability in neural network models.
Our approach builds on the success of using probability as the central quantity.
We show that DoLFIn not only provides interpretable solutions but even slightly outperforms classical CNN and BiLSTM text classifiers.
arXiv Detail & Related papers (2020-11-10T18:32:53Z) - Unifying supervised learning and VAEs -- coverage, systematics and goodness-of-fit in normalizing-flow based neural network models for astro-particle reconstructions [0.0]
Statistical uncertainties, coverage, systematic uncertainties or a goodness-of-fit measure are often not calculated.
We show that a KL-divergence objective of the joint distribution of data and labels allows one to unify supervised learning and variational autoencoders.
We discuss how to calculate coverage probabilities without numerical integration for specific "base-ordered" contours.
arXiv Detail & Related papers (2020-08-13T11:28:57Z) - Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training of such models with a novel loss function and centroid-updating scheme, matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.