Distributionally Robust Statistical Verification with Imprecise Neural Networks
- URL: http://arxiv.org/abs/2308.14815v3
- Date: Mon, 11 Dec 2023 23:57:50 GMT
- Title: Distributionally Robust Statistical Verification with Imprecise Neural Networks
- Authors: Souradeep Dutta, Michele Caprio, Vivian Lin, Matthew Cleaveland, Kuk Jin Jang, Ivan Ruchkin, Oleg Sokolsky, Insup Lee
- Abstract summary: A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems.
This paper proposes a novel approach based on a combination of active learning, uncertainty quantification, and neural network verification.
- Score: 4.094049541486327
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A particularly challenging problem in AI safety is providing guarantees on
the behavior of high-dimensional autonomous systems. Verification approaches
centered around reachability analysis fail to scale, and purely statistical
approaches are constrained by the distributional assumptions about the sampling
process. Instead, we pose a distributionally robust version of the statistical
verification problem for black-box systems, where our performance guarantees
hold over a large family of distributions. This paper proposes a novel approach
based on a combination of active learning, uncertainty quantification, and
neural network verification. A central piece of our approach is an ensemble
technique called Imprecise Neural Networks, which provides the uncertainty to
guide active learning. The active learning uses Sherlock, an exhaustive neural network verification tool, to collect samples. An evaluation on multiple physical simulators in the OpenAI Gym MuJoCo environments with reinforcement-learned controllers demonstrates that our approach can provide useful and scalable guarantees for high-dimensional systems.
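As a rough illustration of the core mechanism, the sketch below shows how an interval-valued ("imprecise") ensemble's prediction envelope can drive active sampling; the member architecture, candidate pool, and acquisition rule are illustrative assumptions rather than the paper's exact construction.

```python
# Sketch: an interval-valued ensemble whose per-input prediction envelope
# drives active sampling. Architectures and the acquisition rule are
# illustrative assumptions, not the paper's exact construction.
import torch
import torch.nn as nn

class ImpreciseEnsemble(nn.Module):
    def __init__(self, in_dim: int, n_members: int = 5):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))
            for _ in range(n_members)
        )

    def forward(self, x):
        preds = torch.stack([m(x) for m in self.members])         # (K, N, 1)
        return preds.min(dim=0).values, preds.max(dim=0).values   # lower, upper

def most_uncertain(ens, pool):
    """Index of the candidate with the widest prediction interval."""
    with torch.no_grad():
        lower, upper = ens(pool)
    return int((upper - lower).argmax())

pool = torch.randn(256, 8)          # candidate system states to query next
ens = ImpreciseEnsemble(in_dim=8)
print("query state:", most_uncertain(ens, pool))
```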
Related papers
- Learning-Based Verification of Stochastic Dynamical Systems with Neural Network Policies [7.9898826915621965]
We use a verification procedure that trains another neural network, which acts as a certificate proving that the policy satisfies the task.
For reach-avoid tasks, it suffices to show that this certificate network is a reach-avoid supermartingale (RASM).
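A minimal sketch of the decrease condition underlying an RASM, checked here by Monte Carlo with a placeholder dynamics, policy, and certificate; the paper trains and formally verifies the certificate rather than sampling it:

```python
# Sketch: Monte Carlo spot-check of the core RASM decrease condition
# E[V(x')] <= V(x) - eps outside the target set. Dynamics and the
# certificate network below are placeholder assumptions.
import torch
import torch.nn as nn

certificate = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

def step(x):  # toy stochastic closed-loop dynamics (assumption)
    return 0.9 * x + 0.05 * torch.randn_like(x)

@torch.no_grad()
def decrease_violations(states, eps=1e-3, n_mc=64):
    """Fraction of sampled non-target states violating the decrease condition."""
    v_now = certificate(states)
    v_next = torch.stack([certificate(step(states)) for _ in range(n_mc)]).mean(0)
    in_target = states.norm(dim=-1, keepdim=True) < 0.1   # toy target set
    bad = (v_next > v_now - eps) & ~in_target
    return bad.float().mean().item()

states = torch.rand(512, 2) * 2 - 1
print("violation rate:", decrease_violations(states))
```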
arXiv Detail & Related papers (2024-06-02T18:19:19Z)
- LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections.
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets.
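A minimal sketch of the idea, assuming member-specific low-rank updates on a shared frozen projection; the rank, initialization, and scaling are illustrative guesses, not the paper's exact configuration:

```python
# Sketch: a LoRA-style ensemble projection with one shared frozen weight plus
# a trainable low-rank update per ensemble member.
import torch
import torch.nn as nn

class LoRAEnsembleLinear(nn.Module):
    def __init__(self, d_in, d_out, n_members=4, rank=4):
        super().__init__()
        self.shared = nn.Linear(d_in, d_out)
        for p in self.shared.parameters():
            p.requires_grad_(False)            # pretrained backbone, frozen
        self.A = nn.Parameter(torch.randn(n_members, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_members, d_out, rank))  # zero init

    def forward(self, x, member):
        delta = x @ self.A[member].T @ self.B[member].T   # low-rank update
        return self.shared(x) + delta

layer = LoRAEnsembleLinear(d_in=16, d_out=16)
x = torch.randn(8, 16)
preds = torch.stack([layer(x, k) for k in range(4)])   # ensemble of 4 outputs
print(preds.shape)  # torch.Size([4, 8, 16])
```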
arXiv Detail & Related papers (2024-05-23T11:10:32Z)
- An Analytic Solution to Covariance Propagation in Neural Networks [10.013553984400488]
This paper presents a sample-free moment propagation technique to accurately characterize the input-output distributions of neural networks.
A key enabler of our technique is an analytic solution for the covariance of random variables passed through nonlinear activation functions.
The wide applicability and merits of the proposed technique are shown in experiments analyzing the input-output distributions of trained neural networks and training Bayesian neural networks.
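For intuition, the univariate building block of such moment propagation has a well-known closed form for ReLU under a Gaussian input; the sketch below checks it against Monte Carlo. The paper's contribution concerns the full covariance (cross-term) case, which is not reproduced here:

```python
# Sketch: closed-form mean/variance of ReLU(z) for Gaussian z ~ N(mu, s^2).
import numpy as np
from scipy.stats import norm

def relu_moments(mu, s):
    a = mu / s
    m1 = mu * norm.cdf(a) + s * norm.pdf(a)                    # E[ReLU(z)]
    m2 = (mu**2 + s**2) * norm.cdf(a) + mu * s * norm.pdf(a)   # E[ReLU(z)^2]
    return m1, m2 - m1**2                                      # mean, variance

# Compare against Monte Carlo.
mu, s = 0.3, 1.2
z = np.random.default_rng(0).normal(mu, s, 1_000_000)
print(relu_moments(mu, s))
print(np.maximum(z, 0).mean(), np.maximum(z, 0).var())
```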
arXiv Detail & Related papers (2024-03-24T14:08:24Z)
- Safe Multi-agent Learning via Trapping Regions [89.24858306636816]
We apply the concept of trapping regions, known from qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm for verification that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where learning dynamics are not known.
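A heavily simplified, sample-based sketch of the trapping-region property (the flow must point inward everywhere on the candidate set's boundary); the toy 2D dynamics and the sampling stand in for the paper's sound partitioning-based verification:

```python
# Sketch: a sampled check that a candidate box is a trapping region, i.e. the
# learning dynamics point inward on every boundary face. Heuristic only.
import numpy as np

def dynamics(x):                       # toy stand-in for joint learning dynamics
    return -x + 0.1 * np.sin(3 * x[..., ::-1])

def points_inward(box_lo, box_hi, n=200):
    """Sample each face of the box; require the flow to cross it inward."""
    rng = np.random.default_rng(0)
    for dim in range(2):
        for side, sign in ((box_lo, -1.0), (box_hi, 1.0)):
            pts = rng.uniform(box_lo, box_hi, size=(n, 2))
            pts[:, dim] = side[dim]              # pin points to this face
            flow = dynamics(pts)[:, dim] * sign  # outward normal component
            if flow.max() >= 0:                  # flow exits somewhere
                return False
    return True

print(points_inward(np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```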
arXiv Detail & Related papers (2023-02-27T14:47:52Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
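For reference, vanilla interval bound propagation through a single affine layer looks as follows; the quantization-aware handling of rounding and clamping is the paper's extension and is only noted in comments:

```python
# Sketch: interval bound propagation (IBP) through an affine layer, the
# primitive QA-IBP builds on. Propagating bounds through weight/activation
# quantization is the paper's extension and is not implemented here.
import torch

def ibp_linear(lower, upper, W, b):
    """Sound output bounds of x @ W.T + b over the input box [lower, upper]."""
    mid, rad = (upper + lower) / 2, (upper - lower) / 2
    mid_out = mid @ W.T + b
    rad_out = rad @ W.abs().T          # radius grows by |W|
    return mid_out - rad_out, mid_out + rad_out

W, b = torch.randn(3, 4), torch.randn(3)
lo, hi = ibp_linear(torch.zeros(1, 4), torch.ones(1, 4), W, b)
x = torch.rand(1, 4)                   # any point in the box...
assert ((x @ W.T + b >= lo) & (x @ W.T + b <= hi)).all()  # ...stays in bounds
```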
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Generating Probabilistic Safety Guarantees for Neural Network Controllers [30.34898838361206]
We use a dynamics model to determine the output properties that must hold for a neural network controller to operate safely.
We develop an adaptive verification approach to efficiently generate an overapproximation of the neural network policy.
We show that our method is able to generate meaningful probabilistic safety guarantees for aircraft collision avoidance neural networks.
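A toy sketch of adaptive refinement: cells of the input set whose overapproximated policy output cannot be decided against a safety threshold are bisected. The interval propagation and 1D setup are simplifying assumptions, not the paper's reachability machinery:

```python
# Sketch: adaptive refinement of an input-set partition. Undecided cells,
# where the output overapproximation straddles the threshold, get bisected.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

def output_bounds(lo, hi):
    """Crude interval propagation through the toy policy."""
    for layer in policy:
        if isinstance(layer, nn.Linear):
            mid, rad = (hi + lo) / 2, (hi - lo) / 2
            mid, rad = layer(mid), rad @ layer.weight.abs().T
            lo, hi = mid - rad, mid + rad
        else:                          # ReLU is monotone
            lo, hi = layer(lo), layer(hi)
    return lo, hi

cells, decided = [(torch.zeros(1, 1), torch.ones(1, 1))], []
for _ in range(8):
    next_cells = []
    for lo, hi in cells:
        out_lo, out_hi = output_bounds(lo, hi)
        if out_hi.item() < 0.5 or out_lo.item() > 0.5:   # bounds decide cell
            decided.append((lo, hi, out_hi.item() < 0.5))
        else:                                            # undecided: bisect
            mid = (lo + hi) / 2
            next_cells += [(lo, mid), (mid, hi)]
    cells = next_cells
print(len(decided), "decided cells,", len(cells), "still undecided")
```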
arXiv Detail & Related papers (2021-03-01T18:48:21Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
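A loose sketch of contrastive pair scoring for anomaly detection, with random embeddings and a neighborhood mean standing in for the learned GNN embeddings and the paper's subgraph instance sampling:

```python
# Sketch: score a node by how much better it agrees with random contexts
# than with its own neighborhood. Embeddings and adjacency are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 16
emb = rng.normal(size=(n, d))                                # node embeddings
neigh = [rng.choice(n, 5, replace=False) for _ in range(n)]  # toy adjacency

def score(i, rounds=10):
    """Higher when a node fits random contexts as well as its own."""
    own = emb[neigh[i]].mean(0) @ emb[i]           # positive-pair agreement
    rand = np.mean([emb[neigh[rng.integers(n)]].mean(0) @ emb[i]
                    for _ in range(rounds)])       # negative-pair agreement
    return rand - own                              # anomaly score

scores = np.array([score(i) for i in range(n)])
print("most anomalous node:", scores.argmax())
```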
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to guard the model against different classes of unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
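The simplest range-based variant of such a coverage monitor can be sketched as follows; the paper's coverage paradigms are richer, and the network and thresholds here are placeholders:

```python
# Sketch: record per-neuron activation ranges on trusted data, then flag
# inputs that drive activations outside those ranges at run time.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
hidden = net[:2]                                     # monitored layer

train = torch.randn(1000, 8)                         # trusted reference data
with torch.no_grad():
    acts = hidden(train)
lo, hi = acts.min(0).values, acts.max(0).values      # trusted activation ranges

@torch.no_grad()
def suspicious(x, tol=0.0):
    a = hidden(x)
    return ((a < lo - tol) | (a > hi + tol)).any(dim=-1)

print(suspicious(10 * torch.randn(4, 8)))            # likely flagged
```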
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty [14.191310794366075]
We develop a data-driven optimization-based method capable of simultaneously certifying the safety of network outputs and localizing them.
We experimentally demonstrate the efficacy and tractability of the method on a deep ReLU network.
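A hedged, sample-based stand-in for the idea: estimate an output box from random input draws and test its coverage on fresh samples. The paper's optimization-based method provides stronger simultaneous certification and localization than this sketch:

```python
# Sketch: estimate an output enclosure for a ReLU network under random input
# uncertainty from samples, then check it on fresh draws.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))

def sample_inputs(n):                  # assumed input uncertainty model
    return torch.randn(n, 4) * 0.1 + 1.0

with torch.no_grad():
    outs = net(sample_inputs(10_000))
box_lo, box_hi = outs.min(0).values, outs.max(0).values

with torch.no_grad():
    fresh = net(sample_inputs(100_000))
inside = ((fresh >= box_lo) & (fresh <= box_hi)).all(dim=-1).float().mean()
print(f"fresh samples inside estimated box: {inside.item():.4f}")
```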
arXiv Detail & Related papers (2020-10-02T19:13:35Z)
- SAMBA: Safe Model-Based & Active Reinforcement Learning [59.01424351231993]
SAMBA is a framework for safe reinforcement learning that combines aspects from probabilistic modelling, information theory, and statistics.
We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low- and high-dimensional state representations.
We provide intuition as to the effectiveness of the framework by a detailed analysis of our active metrics and safety constraints.
arXiv Detail & Related papers (2020-06-12T10:40:46Z)