Reachability Analysis of Convolutional Neural Networks
- URL: http://arxiv.org/abs/2106.12074v1
- Date: Tue, 22 Jun 2021 21:42:00 GMT
- Title: Reachability Analysis of Convolutional Neural Networks
- Authors: Xiaodong Yang, Tomoya Yamaguchi, Hoang-Dung Tran, Bardh Hoxha, Taylor T. Johnson, Danil Prokhorov
- Abstract summary: We propose an approach to compute the exact reachable sets of a network given an input domain.
Our approach is also capable of backtracking to the input domain given an output reachable set.
A fast analysis method is also introduced, which speeds up the computation of reachable sets by considering only selected sensitive neurons in each layer.
- Score: 10.384532888747993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks have been widely employed as an effective
technique to handle complex and practical problems. However, one of the
fundamental problems is the lack of formal methods to analyze their behavior.
To address this challenge, we propose an approach to compute the exact
reachable sets of a network given an input domain, where the reachable set is
represented by the face lattice structure. Besides the computation of reachable
sets, our approach is also capable of backtracking to the input domain given an
output reachable set. Therefore, a full analysis of a network's behavior can be
realized. In addition, a fast analysis method is introduced, which accelerates
the computation of reachable sets by considering only selected sensitive
neurons in each layer. The exact pixel-level reachability analysis method is
evaluated on a CNN for the CIFAR10 dataset and compared to related works. The
fast analysis method is evaluated on a CNN for the CIFAR10 dataset and on the
VGG16 architecture for the ImageNet dataset.
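For intuition, the sketch below propagates a pixel-level perturbation box through one convolution-plus-ReLU layer using interval arithmetic. This is a coarse over-approximation standing in for the paper's exact face-lattice computation; the shapes, kernel, and perturbation radius are illustrative assumptions.

```python
# A minimal sketch of over-approximate reachability for one conv + ReLU
# layer via interval arithmetic. This is NOT the paper's exact
# face-lattice method -- just a simple baseline to make the idea concrete.
import numpy as np

def conv2d_interval(lo, hi, kernel, bias):
    """Propagate a box [lo, hi] (H x W) through a single-channel
    valid convolution with the given kernel and bias."""
    kh, kw = kernel.shape
    H, W = lo.shape
    out_lo = np.zeros((H - kh + 1, W - kw + 1))
    out_hi = np.zeros_like(out_lo)
    k_pos = np.maximum(kernel, 0.0)   # positive weights preserve ordering
    k_neg = np.minimum(kernel, 0.0)   # negative weights flip it
    for i in range(out_lo.shape[0]):
        for j in range(out_lo.shape[1]):
            patch_lo = lo[i:i + kh, j:j + kw]
            patch_hi = hi[i:i + kh, j:j + kw]
            out_lo[i, j] = np.sum(k_pos * patch_lo + k_neg * patch_hi) + bias
            out_hi[i, j] = np.sum(k_pos * patch_hi + k_neg * patch_lo) + bias
    return out_lo, out_hi

def relu_interval(lo, hi):
    # ReLU is monotone, so it maps bounds to bounds exactly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Example: a 5x5 image with a +/- 0.1 pixel-level perturbation.
rng = np.random.default_rng(0)
img = rng.random((5, 5))
lo, hi = conv2d_interval(img - 0.1, img + 0.1, rng.standard_normal((3, 3)), 0.0)
lo, hi = relu_interval(lo, hi)
print("output bounds:", lo.min(), hi.max())
```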
Related papers
- Front-propagation Algorithm: Explainable AI Technique for Extracting Linear Function Approximations from Neural Networks [0.0]
This paper introduces the front-propagation algorithm, a novel AI technique designed to elucidate the decision-making logic of deep neural networks.
Unlike other popular explainability algorithms such as Integrated Gradients or Shapley Values, the proposed algorithm is able to extract an accurate and consistent linear function explanation of the network.
We demonstrate its efficacy in providing accurate linear functions with three different neural network architectures trained on publicly available benchmark datasets.
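The summary does not spell out the algorithm, but the goal of a linear function explanation can be made concrete: around any fixed input, a ReLU network is exactly linear, and that local linear function can be extracted by composing activation-masked layer matrices. The sketch below illustrates this idea; it is not necessarily the paper's front-propagation procedure.

```python
# Sketch: extract the exact local linear function f(x) = W_eff @ x + b_eff
# of a ReLU network around a given input, by composing layer matrices
# masked with the activation pattern. Illustrates the *goal* of a linear
# explanation; not necessarily the paper's front-propagation steps.
import numpy as np

def local_linear(weights, biases, x):
    """weights/biases define y = W_n(...relu(W_1 x + b_1)...) + b_n."""
    W_eff = np.eye(x.size)
    b_eff = np.zeros(x.size)
    h = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = W @ h + b
        W_eff = W @ W_eff
        b_eff = W @ b_eff + b
        if i < len(weights) - 1:          # ReLU on all hidden layers
            mask = (h > 0).astype(float)
            h = h * mask
            W_eff = mask[:, None] * W_eff
            b_eff = mask * b_eff
    return W_eff, b_eff

rng = np.random.default_rng(1)
Ws = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
bs = [rng.standard_normal(4), rng.standard_normal(2)]
x = rng.standard_normal(3)
W_eff, b_eff = local_linear(Ws, bs, x)
# The extracted linear function reproduces the network output exactly at x:
out = Ws[1] @ np.maximum(Ws[0] @ x + bs[0], 0) + bs[1]
assert np.allclose(W_eff @ x + b_eff, out)
```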
arXiv Detail & Related papers (2024-05-25T14:50:23Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- A Simple and Efficient Sampling-based Algorithm for General Reachability Analysis [32.488975902387395]
General-purpose reachability analysis remains a notoriously challenging problem with applications ranging from neural network verification to safety analysis of dynamical systems.
By sampling inputs, evaluating their images in the true reachable set, and taking the $\epsilon$-padded convex hull of those images as a set estimator, this algorithm applies to general problem settings and is simple to implement.
The accompanying analysis informs algorithmic design for obtaining an $\epsilon$-close reachable set approximation with high probability.
On a neural network verification task, we show that this approach is more accurate and significantly faster than prior work.
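The estimator is simple enough to sketch directly. Below, a toy two-dimensional map stands in for the network; the sample count and padding $\epsilon$ are arbitrary choices, not values from the paper.

```python
# Sketch of the sampling-based reachable set estimator described above:
# sample inputs, map them through the system, and take the epsilon-padded
# convex hull of the output samples. The map f and all constants are
# stand-ins, not values from the paper.
import numpy as np
from scipy.spatial import ConvexHull

def f(x):                    # stand-in for a neural network / dynamics map
    return np.stack([np.tanh(x[:, 0] + x[:, 1]), x[:, 0] * x[:, 1]], axis=1)

rng = np.random.default_rng(0)
n, eps = 1000, 0.05
X = rng.uniform(-1.0, 1.0, size=(n, 2))   # sample the input set [-1, 1]^2
Y = f(X)                                   # images in the true reachable set
hull = ConvexHull(Y)                       # convex hull of output samples

# The estimate is the hull padded by eps; qhull returns unit facet normals,
# so a point y is inside iff every facet inequality A @ y + b <= eps holds.
A, b = hull.equations[:, :-1], hull.equations[:, -1]
def in_padded_hull(y):
    return np.all(A @ y + b <= eps)

# The image of an interior input point should fall inside the estimate.
print(in_padded_hull(f(np.array([[0.2, -0.3]]))[0]))
```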
arXiv Detail & Related papers (2021-12-10T18:56:16Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we prove that dynamically adapting network architectures tailored to each domain task, along with weight finetuning, yields benefits in both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- Progressive Spatio-Temporal Graph Convolutional Network for Skeleton-Based Human Action Recognition [97.14064057840089]
We propose a method to automatically find a compact and problem-specific network for graph convolutional networks in a progressive manner.
Experimental results on two datasets for skeleton-based human action recognition indicate that the proposed method has competitive or even better classification performance.
arXiv Detail & Related papers (2020-11-11T09:57:49Z)
- Learning for Integer-Constrained Optimization through Neural Networks with Limited Training [28.588195947764188]
We introduce a symmetric and decomposed neural network structure, which is fully interpretable in terms of the functionality of its constituent components.
By taking advantage of the underlying pattern of the integer constraint, the introduced neural network offers superior generalization performance with limited training.
We show that the introduced decomposed approach can be further extended to semi-decomposed frameworks.
arXiv Detail & Related papers (2020-11-10T21:17:07Z)
- NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
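For intuition, the classical two-layer mean-field limit (a standard result that this framework generalizes to the features of deep architectures) replaces the width-$m$ average over neurons with an integral against a parameter distribution $\rho$:

```latex
% Mean-field limit of a two-layer network: the average of m neurons
% becomes an expectation over a distribution rho on parameters (a, w).
f_m(x) = \frac{1}{m} \sum_{i=1}^{m} a_i \, \sigma\!\left(w_i^\top x\right)
\quad \xrightarrow[m \to \infty]{} \quad
f_\rho(x) = \int a \, \sigma\!\left(w^\top x\right) \mathrm{d}\rho(a, w)
```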
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Verification of Deep Convolutional Neural Networks Using ImageStars [10.44732293654293]
Convolutional Neural Networks (CNNs) have redefined the state-of-the-art in many real-world applications.
However, CNNs are vulnerable to adversarial attacks, where slight changes to their inputs may lead to sharp changes in their output.
We describe a set-based framework that successfully deals with real-world CNNs, such as VGG16 and VGG19, that have high accuracy on ImageNet.
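The ImageStar representation behind this framework encodes a set of images as an anchor image plus a linear span of generator images whose coefficients obey linear predicate constraints. A minimal sketch follows; the field names and the example perturbation are our own choices.

```python
# Sketch of an ImageStar-style set: images x = anchor + sum_i alpha_i * v_i
# with coefficients alpha constrained by C @ alpha <= d. The affine_map
# rule is the standard star-set property that an affine layer acts on the
# anchor and generators while leaving the predicate unchanged.
from dataclasses import dataclass
import numpy as np

@dataclass
class ImageStar:
    anchor: np.ndarray       # c: the center image, flattened
    generators: np.ndarray   # V: one column per basis image
    C: np.ndarray            # predicate constraints on coefficients:
    d: np.ndarray            #   { alpha : C @ alpha <= d }

    def affine_map(self, W, b):
        """Exact image of the set under x -> W @ x + b."""
        return ImageStar(W @ self.anchor + b, W @ self.generators,
                         self.C, self.d)

# Example: a 4-pixel image with +/- 0.1 brightness perturbation on pixel 0.
star = ImageStar(anchor=np.array([0.5, 0.2, 0.8, 0.1]),
                 generators=np.array([[1.0], [0.0], [0.0], [0.0]]),
                 C=np.array([[1.0], [-1.0]]), d=np.array([0.1, 0.1]))
mapped = star.affine_map(np.eye(4) * 2.0, np.zeros(4))  # a toy linear layer
print(mapped.anchor, mapped.generators.ravel())
```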
arXiv Detail & Related papers (2020-04-12T00:37:21Z)
- Reachability Analysis for Feed-Forward Neural Networks using Face Lattices [10.838397735788245]
We propose a parallelizable technique to compute the exact reachable sets of a neural network given an input set.
Our approach is capable of constructing the complete input set given an output set, so that any input that leads to safety violation can be tracked.
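The backtracking idea can be illustrated on a single linear region: wherever the activation pattern is fixed, a ReLU network is affine, so an output polytope pulls back to an input polytope in closed form. A minimal sketch, with the affine piece and the unsafe output set chosen arbitrarily:

```python
# Sketch of the backtracking idea: on any region where a ReLU network's
# activation pattern is fixed, the network is affine, y = M @ x + d, so
# an output polytope {y : H @ y <= g} pulls back to the input polytope
# {x : (H @ M) @ x <= g - H @ d}. The paper's face-lattice machinery
# organizes these regions; here we show a single pull-back.
import numpy as np

def preimage_halfspaces(M, d, H, g):
    """Halfspace representation of {x : H @ (M @ x + d) <= g}."""
    return H @ M, g - H @ d

# Toy affine piece of a network and an unsafe output region y_0 >= 0.9,
# written as -y_0 <= -0.9 in H-form. Any x satisfying the returned
# constraints (within this activation region) violates safety.
M = np.array([[0.5, -0.2], [0.1, 0.3]])
d = np.array([0.1, -0.4])
H = np.array([[-1.0, 0.0]])
g = np.array([-0.9])
A, b = preimage_halfspaces(M, d, H, g)
print("input constraints A @ x <= b:", A, b)
```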
arXiv Detail & Related papers (2020-03-02T22:23:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.