Reachability Analysis for Feed-Forward Neural Networks using Face Lattices
- URL: http://arxiv.org/abs/2003.01226v1
- Date: Mon, 2 Mar 2020 22:23:57 GMT
- Title: Reachability Analysis for Feed-Forward Neural Networks using Face Lattices
- Authors: Xiaodong Yang, Hoang-Dung Tran, Weiming Xiang, Taylor Johnson
- Abstract summary: We propose a parallelizable technique to compute the exact reachable set of a neural network for a given input set.
Our approach is capable of constructing the complete input set given an output set, so that any input that leads to a safety violation can be tracked.
- Score: 10.838397735788245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have been widely applied as an effective approach to
handle complex and practical problems. However, one of the most fundamental
open problems is the lack of formal methods to analyze the safety of their
behaviors. To address this challenge, we propose a parallelizable technique to
compute the exact reachable set of a neural network for a given input set. Our method
currently focuses on feed-forward neural networks with ReLU activation
functions. One of the primary challenges for polytope-based approaches is
identifying the intersection between intermediate polytopes and hyperplanes
from neurons. In this regard, we present a new approach to construct the
polytopes with the face lattice, a complete combinatorial structure. The
correctness and performance of our methodology are evaluated by verifying the
safety of ACAS Xu networks and other benchmarks. Compared to state-of-the-art
methods such as Reluplex, Marabou, and NNV, our approach exhibits a
significantly higher efficiency. Additionally, our approach is capable of
constructing the complete input set given an output set, so that any input that
leads to a safety violation can be tracked.
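The key step behind polytope-based ReLU reachability, splitting an intermediate polytope along each neuron's hyperplane and then mapping every resulting cell linearly, can be sketched in two dimensions. This is a simplified vertex-list illustration assuming a convex, counter-clockwise polygon input; it is not the paper's face-lattice construction, and the function names are illustrative:

```python
import numpy as np

def clip_halfplane(poly, axis, keep_nonneg):
    """Sutherland-Hodgman clip of a convex polygon (list of 2D vertices,
    counter-clockwise) against x[axis] >= 0 or x[axis] <= 0."""
    sign = 1.0 if keep_nonneg else -1.0
    out = []
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        da, db = sign * a[axis], sign * b[axis]
        if da >= 0:
            out.append(a)
        if (da >= 0) != (db >= 0):  # edge crosses the splitting hyperplane
            t = da / (da - db)
            out.append(a + t * (b - a))
    return out

def relu_reach_2d(poly):
    """Exact reachable set of ReLU on a convex 2D polytope: split by the
    hyperplanes x0 = 0 and x1 = 0, then map each cell vertex-wise
    (ReLU is linear inside each orthant)."""
    cells = [[np.asarray(v, dtype=float) for v in poly]]
    for axis in (0, 1):
        nxt = []
        for cell in cells:
            for keep in (True, False):
                piece = clip_halfplane(cell, axis, keep)
                if len(piece) >= 3:  # drop degenerate (empty or flat) pieces
                    nxt.append(piece)
        cells = nxt
    # vertex-wise ReLU is exact here because each cell lies in one orthant
    return [[np.maximum(v, 0.0) for v in cell] for cell in cells]
```

Each returned cell lies in a single orthant, where ReLU is linear, so mapping its vertices is exact; cells from negative half-spaces collapse onto lower-dimensional faces of the output set.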
Related papers
- A neural network approach for solving the Monge-Ampère equation with transport boundary condition [0.0]
This paper introduces a novel neural network-based approach to solving the Monge-Ampère equation with the transport boundary condition.
We leverage multilayer perceptron networks to learn approximate solutions by minimizing a loss function that encompasses the equation's residual, boundary conditions, and convexity constraints.
arXiv Detail & Related papers (2024-10-25T11:54:00Z)
- An Efficient Learning-based Solver Comparable to Metaheuristics for the Capacitated Arc Routing Problem [67.92544792239086]
We introduce an NN-based solver that significantly narrows the performance gap with advanced metaheuristics.
First, we propose direction-aware facilitating attention model (DaAM) to incorporate directionality into the embedding process.
Second, we design a supervised reinforcement learning scheme that involves supervised pre-training to establish a robust initial policy.
arXiv Detail & Related papers (2024-03-11T02:17:42Z)
- Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
We show that convergence guarantees and generalizability of the unrolled networks are still open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
arXiv Detail & Related papers (2023-12-25T18:51:23Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Automated Reachability Analysis of Neural Network-Controlled Systems via Adaptive Polytopes [2.66512000865131]
We develop a new approach for over-approximating the reachable sets of neural network dynamical systems using adaptive template polytopes.
We illustrate the utility of the proposed approach in the reachability analysis of linear systems driven by neural network controllers.
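The template-polytope over-approximation underlying the entry above can be sketched with support functions. This is a generic fixed-template sketch under an assumed box input set, not the paper's adaptive template selection, and `template_overapprox` is an illustrative name:

```python
import numpy as np

def template_overapprox(W, b, center, radii, directions):
    """Template-polytope over-approximation of the affine image
    {W x + b : |x - center| <= radii}. For each template direction d,
    the support function of the box image is
    d . (W center + b) + |d . W| . radii, which is tight for boxes.
    Returns offsets o defining the polytope {y : directions @ y <= o}."""
    c_out = W @ center + b
    return directions @ c_out + np.abs(directions @ W) @ radii
```

For the identity map on the unit box with axis-aligned template directions, the offsets recover the box itself; adaptive methods additionally optimize the rows of `directions` per step.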
arXiv Detail & Related papers (2022-12-14T23:49:53Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Imbedding Deep Neural Networks [0.0]
Continuous depth neural networks, such as Neural ODEs, have refashioned the understanding of residual neural networks in terms of non-linear vector-valued optimal control problems.
We propose a new approach which explicates the network's 'depth' as a fundamental variable, thus reducing the problem to a system of forward-facing initial value problems.
arXiv Detail & Related papers (2022-01-31T22:00:41Z)
- Learning to Detect Critical Nodes in Sparse Graphs via Feature Importance Awareness [53.351863569314794]
The critical node problem (CNP) aims to find a set of critical nodes from a network whose deletion maximally degrades the pairwise connectivity of the residual network.
This work proposes a feature importance-aware graph attention network for node representation.
This representation is combined with a dueling double deep Q-network to create an end-to-end algorithm that solves the CNP for the first time.
arXiv Detail & Related papers (2021-12-03T14:23:05Z)
- Reachability Analysis of Convolutional Neural Networks [10.384532888747993]
We propose an approach to compute the exact reachable sets of a network given an input domain.
Our approach is also capable of backtracking to the input domain given an output reachable set.
An approach for fast analysis is also introduced, which speeds up the computation of reachable sets.
arXiv Detail & Related papers (2021-06-22T21:42:00Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that iteratively partitions the verification problem and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
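The pre-processing idea in the last entry, fixing neurons whose ReLU activation phase is constant over the whole input set, can be illustrated with simple interval bounds. This is a hedged sketch of the general technique, not necessarily the paper's exact algorithm, and `stable_neurons` is an illustrative name:

```python
import numpy as np

def stable_neurons(W, b, lower, upper):
    """One-layer interval bound propagation over the input box
    [lower, upper]. A neuron is 'always active' if its pre-activation
    lower bound is >= 0 (ReLU acts as the identity) and 'always
    inactive' if its upper bound is <= 0 (ReLU outputs zero).
    Returns two boolean masks (always_active, always_inactive)."""
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    mid = W @ center + b
    dev = np.abs(W) @ radius          # worst-case deviation per neuron
    lb, ub = mid - dev, mid + dev     # pre-activation interval bounds
    return lb >= 0, ub <= 0
```

Stable neurons can be replaced by fixed linear (or zero) maps before verification starts, shrinking the number of case splits the main analysis must perform.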
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.