Verifying Probabilistic Specifications with Functional Lagrangians
- URL: http://arxiv.org/abs/2102.09479v1
- Date: Thu, 18 Feb 2021 17:00:40 GMT
- Title: Verifying Probabilistic Specifications with Functional Lagrangians
- Authors: Leonard Berrada, Sumanth Dathathri, Krishnamurthy (Dj) Dvijotham,
Robert Stanforth, Rudy Bunel, Jonathan Uesato, Sven Gowal, M. Pawan Kumar
- Abstract summary: We propose a framework for verifying input-output specifications of neural networks using functional Lagrange multipliers.
We show that the framework provably leads to tight verification when a sufficiently flexible class of functional multipliers is chosen.
- Score: 47.81366702121604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a general framework for verifying input-output specifications of
neural networks using functional Lagrange multipliers that generalizes standard
Lagrangian duality. We derive theoretical properties of the framework, which
can handle arbitrary probabilistic specifications, showing that it provably
leads to tight verification when a sufficiently flexible class of functional
multipliers is chosen. With a judicious choice of the class of functional
multipliers, the framework can accommodate desired trade-offs between tightness
and complexity. We demonstrate empirically that the framework can handle a
diverse set of networks, including Bayesian neural networks with Gaussian
posterior approximations and MC-dropout networks, and that it can verify
specifications on adversarial robustness and out-of-distribution (OOD)
detection. Our framework
improves upon prior work in some settings and also generalizes to new
stochastic networks and probabilistic specifications, like distributionally
robust OOD detection.
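To make the construction concrete, the functional Lagrangian bound can be sketched as follows. This is our own rendering, reconstructed from the abstract: the network is treated as K stochastic layers with transition kernels p_k(x_k | x_{k-1}), the input ranges over a set X_0, each X_k over-approximates the support of layer k's output, and the specification requires E[psi(x_K)] <= 0. The paper's exact statement and assumptions may differ.

    % Sketch of the functional Lagrangian bound (our notation, reconstructed
    % from the abstract; not a verbatim statement from the paper).
    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    For any multipliers $\lambda_k : \mathcal{X}_k \to \mathbb{R}$, $k = 1, \dots, K$
    (with the convention $\lambda_0 \equiv 0$), telescoping the expectation and
    bounding each term by a per-layer maximization gives
    \begin{align*}
    \max_{x_0 \in \mathcal{X}_0} \mathbb{E}\bigl[\psi(x_K) \mid x_0\bigr]
      \;\le\; g(\lambda)
      &= \sum_{k=1}^{K} \max_{x_{k-1} \in \mathcal{X}_{k-1}}
         \Bigl( \mathbb{E}_{x_k \sim p_k(\cdot \mid x_{k-1})}\bigl[\lambda_k(x_k)\bigr]
                - \lambda_{k-1}(x_{k-1}) \Bigr) \\
      &\quad + \max_{x_K \in \mathcal{X}_K} \bigl( \psi(x_K) - \lambda_K(x_K) \bigr).
    \end{align*}
    Exhibiting any $\lambda$ with $g(\lambda) \le 0$ certifies the specification.
    Restricting each $\lambda_k$ to linear functions recovers standard Lagrangian
    duality; richer function classes tighten the bound at the cost of harder
    per-layer maximizations.
    \end{document}

The toy below instantiates this bound for a single Gaussian layer with a linear multiplier. Everything in it (the layer, the specification, the truncated reachable set) is our own construction for illustration, not code from the paper.

    # Toy functional-Lagrangian-style bound for one stochastic layer
    # x1 = w*x0 + eps, eps ~ N(0, sigma^2), with spec E[psi(x1)] <= 0 for
    # psi(x1) = x1 - c. Illustrative construction, not the paper's code.
    import numpy as np

    w, sigma, c = 2.0, 0.5, 5.0      # layer weight, noise std, spec threshold
    x0_lo, x0_hi = -1.0, 1.0         # input set X0 = [-1, 1]
    # Heuristic over-approximation of the reachable set X1 (mean range +/- 4 sigma);
    # a rigorous certificate would also account for the Gaussian tail mass.
    x1_lo, x1_hi = w * x0_lo - 4 * sigma, w * x0_hi + 4 * sigma

    def dual_bound(a):
        """g(lambda) for the linear multiplier lambda1(x) = a*x."""
        # max over X0 of E[lambda1(w*x0 + eps)] = max of a*w*x0 (zero-mean noise)
        term1 = max(a * w * x0_lo, a * w * x0_hi)
        # max over X1 of psi(x1) - lambda1(x1) = max of (1 - a)*x1 - c
        term2 = max((1 - a) * x1_lo, (1 - a) * x1_hi) - c
        return term1 + term2

    # Every 'a' yields a valid upper bound on max_{x0} E[psi(x1)]; searching
    # over multipliers tightens it (here a = 1 makes the bound exact).
    best = min(dual_bound(a) for a in np.linspace(-2.0, 2.0, 401))
    print(f"certified upper bound: {best:.3f}")           # -3.000
    print(f"true max of E[psi]:    {w * x0_hi - c:.3f}")  # -3.000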
Related papers
- ReLU Networks as Random Functions: Their Distribution in Probability Space [13.408904884821903]
This paper presents a novel framework for understanding trained ReLU networks as random affine functions.
We derive the discrete probability distribution over the affine functions realizable by the network.
Our work provides a framework for understanding the behavior and performance of ReLU networks (a toy sketch of this piecewise-affine view appears after the list below).
arXiv Detail & Related papers (2025-03-28T01:58:40Z)
- Explainable Neural Networks with Guarantees: A Sparse Estimation Approach [11.142723510517778]
This paper introduces a novel approach to constructing an explainable neural network that harmonizes predictiveness and explainability.
Our model, termed SparXnet, is designed as a linear combination of a sparse set of jointly learned features.
Our work paves the way for further research on sparse and explainable neural networks with guarantees.
arXiv Detail & Related papers (2025-01-02T12:10:17Z)
- Probabilistic Verification of ReLU Neural Networks via Characteristic Functions [11.489187712465325]
We use ideas from probability theory in the frequency domain to provide probabilistic verification guarantees for ReLU neural networks.
We interpret a (deep) feedforward neural network as a discrete dynamical system over a finite horizon.
We obtain the corresponding cumulative distribution function of the output set, which can be used to check if the network is performing as expected.
arXiv Detail & Related papers (2022-12-03T05:53:57Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Semantic Probabilistic Layers for Neuro-Symbolic Learning [83.25785999205932]
We design a predictive layer for structured-output prediction (SOP).
It can be plugged into any neural network, guaranteeing that its predictions are consistent with a set of predefined symbolic constraints.
Our Semantic Probabilistic Layer (SPL) can model intricate correlations and hard constraints over a structured output space.
arXiv Detail & Related papers (2022-06-01T12:02:38Z)
- Bayesian Attention Belief Networks [59.183311769616466]
Attention-based neural networks have achieved state-of-the-art results on a wide range of tasks.
This paper introduces Bayesian attention belief networks, which construct a decoder network by modeling unnormalized attention weights.
We show that our method outperforms deterministic attention and state-of-the-art attention variants in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks.
arXiv Detail & Related papers (2021-06-09T17:46:22Z)
- Probabilistic Graph Attention Network with Conditional Kernels for Pixel-Wise Prediction [158.88345945211185]
We present a novel approach that advances the state of the art on pixel-level prediction in a fundamental aspect, namely structured multi-scale feature learning and fusion.
We propose a probabilistic graph attention network structure based on a novel Attention-Gated Conditional Random Fields (AG-CRFs) model for learning and fusing multi-scale representations in a principled manner.
arXiv Detail & Related papers (2021-01-08T04:14:29Z)
- Probabilistic electric load forecasting through Bayesian Mixture Density Networks [70.50488907591463]
Probabilistic load forecasting (PLF) is a key component in the extended tool-chain required for efficient management of smart energy grids.
We propose a novel PLF approach, framed on Bayesian Mixture Density Networks.
To achieve reliable and computationally scalable estimators of the posterior distributions, both Mean Field variational inference and deep ensembles are integrated.
arXiv Detail & Related papers (2020-12-23T16:21:34Z)
- DebiNet: Debiasing Linear Models with Nonlinear Overparameterized Neural Networks [11.04121146441257]
We incorporate overparameterized neural networks into semi-parametric models to bridge the gap between inference and prediction.
We show the theoretical foundations that make this possible and demonstrate it with numerical experiments.
We propose a framework, DebiNet, in which we plug arbitrary feature selection methods into our semi-parametric neural network.
arXiv Detail & Related papers (2020-11-01T04:12:53Z)
- Tractably Modelling Dependence in Networks Beyond Exchangeability [0.0]
We study the estimation, clustering, and degree behavior of the network in our setting, exploring why and under which general conditions non-exchangeable network data can be described by a block model.
arXiv Detail & Related papers (2020-07-28T17:13:59Z)
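As noted in the first related entry above, ReLU networks can be read as random affine functions. A minimal sketch of that piecewise-affine view (our illustration, not code from that paper): on each activation pattern the network coincides with a fixed affine map, and a distribution over inputs induces a discrete distribution over those maps.

    # For a fixed ReLU activation pattern, a feedforward ReLU network is exactly
    # affine; sampling inputs induces a discrete distribution over the realized
    # affine maps. Our illustration of the related paper's viewpoint.
    from collections import Counter
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)   # tiny 2-4-1 ReLU net
    W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

    def forward(x):
        h = W1 @ x + b1
        pattern = tuple(h > 0)               # which hidden units are active
        return W2 @ np.maximum(h, 0.0) + b2, pattern

    def affine_for_pattern(pattern):
        """The affine map (A, c) the network realizes on this pattern's region."""
        D = np.diag(np.asarray(pattern, dtype=float))
        return W2 @ D @ W1, W2 @ D @ b1 + b2

    counts = Counter()
    for _ in range(10_000):
        x = rng.normal(size=2)
        y, pattern = forward(x)
        A, c = affine_for_pattern(pattern)
        assert np.allclose(y, A @ x + c)     # the affine map matches the net
        counts[pattern] += 1

    # Empirical pattern frequencies approximate the discrete distribution
    # over affine functions realized by the network under this input law.
    for pattern, n in counts.most_common(3):
        print(pattern, n / 10_000)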
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.