Traversing the Local Polytopes of ReLU Neural Networks: A Unified
Approach for Network Verification
- URL: http://arxiv.org/abs/2111.08922v1
- Date: Wed, 17 Nov 2021 06:12:39 GMT
- Title: Traversing the Local Polytopes of ReLU Neural Networks: A Unified
Approach for Network Verification
- Authors: Shaojie Xu, Joel Vaughan, Jie Chen, Aijun Zhang, Agus Sudjianto
- Abstract summary: Neural networks (NNs) with ReLU activation functions have found success in a wide range of applications.
Previous works to examine robustness and to improve interpretability partially exploited the piecewise linear function form of ReLU NNs.
In this paper, we explore the unique topological structure that ReLU NNs create in the input space, identifying the adjacency among the partitioned local polytopes.
- Score: 6.71092092685492
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although neural networks (NNs) with ReLU activation functions have found
success in a wide range of applications, their adoption in risk-sensitive
settings has been limited by concerns about robustness and interpretability.
Previous work on examining robustness and improving interpretability has
partially exploited the piecewise-linear form of ReLU NNs. In this paper, we
explore the unique topological structure that ReLU NNs create in the input
space, identifying the adjacency among the partitioned local polytopes and
developing a traversing algorithm based on this adjacency. Our polytope
traversing algorithm can be adapted to verify a wide range of network
properties related to robustness and interpretability, providing a unified
approach to examine the network behavior. As the traversing algorithm
explicitly visits all local polytopes, it returns a clear and full picture of
the network behavior within the traversed region. The time and space complexity
of the traversing algorithm is determined by the number of the ReLU NN's
partitioning hyperplanes that pass through the traversed region.
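The abstract's key objects can be made concrete with a small sketch (the helper functions and the toy network below are illustrative assumptions, not the authors' released code): each hidden ReLU neuron defines a partitioning hyperplane, the joint on/off pattern of all neurons identifies the local polytope containing an input, and two polytopes are candidate neighbors when their patterns differ on exactly one hyperplane, which is the adjacency the traversing algorithm follows.

```python
import numpy as np

def activation_pattern(weights, biases, x):
    """Sign pattern of all hidden pre-activations at input x.

    Each hidden neuron's pre-activation z_i = 0 defines a partitioning
    hyperplane; the joint pattern (z_i > 0) identifies the local polytope
    containing x, on which the network acts as a single affine map.
    Illustrative helper, not the paper's implementation.
    """
    pattern, h = [], np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        z = W @ h + b                              # pre-activations
        pattern.extend(bool(v) for v in (z > 0))   # side of each hyperplane
        h = np.maximum(z, 0.0)                     # ReLU
    return pattern

def is_adjacent(p, q):
    """Candidate polytope adjacency: patterns differ on exactly one
    partitioning hyperplane."""
    return sum(a != b for a, b in zip(p, q)) == 1

# Toy 2-input, 2-neuron network: two hyperplanes partition the plane.
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, -0.25])
p = activation_pattern([W1], [b1], [1.0, 0.5])  # both neurons active
q = activation_pattern([W1], [b1], [0.2, 0.8])  # first hyperplane crossed
```

Traversal then amounts to visiting the pattern of a starting input and repeatedly flipping one bit at a time, keeping only flips whose resulting polytope is non-empty within the region of interest.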
Related papers
- Scalable spectral representations for network multiagent control [53.631272539560435]
Network Markov Decision Processes (MDPs), a popular model for multi-agent control, pose a significant challenge to efficient learning.
We first derive scalable spectral local representations for network MDPs, which induce a network linear subspace for the local $Q$-function of each agent.
We design a scalable algorithmic framework for continuous state-action network MDPs, and provide end-to-end guarantees for the convergence of our algorithm.
arXiv Detail & Related papers (2024-10-22T17:45:45Z)
- Joint Admission Control and Resource Allocation of Virtual Network Embedding via Hierarchical Deep Reinforcement Learning [69.00997996453842]
We propose a deep Reinforcement Learning approach to learn a joint Admission Control and Resource Allocation policy for virtual network embedding.
We show that HRL-ACRA outperforms state-of-the-art baselines in terms of both the acceptance ratio and long-term average revenue.
arXiv Detail & Related papers (2024-06-25T07:42:30Z)
- Network Inversion of Binarised Neural Nets [3.5571131514746837]
Network inversion plays a pivotal role in unraveling the black-box nature of input to output mappings in neural networks.
This paper introduces a novel approach to invert a trained BNN by encoding it into a CNF formula that captures the network's structure.
arXiv Detail & Related papers (2024-02-19T09:39:54Z)
- The Evolution of the Interplay Between Input Distributions and Linear Regions in Networks [20.97553518108504]
We count the number of linear convex regions in deep neural networks based on ReLU.
In particular, we prove that for any one-dimensional input, there exists a minimum threshold for the number of neurons required to express it.
We also unveil the iterative refinement process of decision boundaries in ReLU networks during training.
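The one-dimensional claim above can be illustrated numerically (a hypothetical sketch, not this paper's method: `count_regions_1d` and the toy weights are assumptions for illustration). Counting distinct activation patterns along a fine grid lower-bounds the number of linear regions; a single hidden layer with k neurons has at most k breakpoints, hence at most k + 1 regions on a line.

```python
import numpy as np

def count_regions_1d(weights, biases, lo=-2.0, hi=2.0, n=2001):
    """Estimate how many linear regions a ReLU net induces on a 1-D
    interval by counting distinct activation patterns on a fine grid.
    Illustrative only: a grid can miss very narrow regions, so this is
    a lower bound on the true count."""
    patterns = set()
    for x in np.linspace(lo, hi, n):
        h, bits = np.array([x]), []
        for W, b in zip(weights, biases):
            z = W @ h + b
            bits.extend(z > 0)          # which side of each breakpoint
            h = np.maximum(z, 0.0)      # ReLU
        patterns.add(tuple(bits))
    return len(patterns)

# One hidden layer, three neurons: breakpoints at x = -0.5, 0, 0.5,
# so the line splits into 4 linear regions.
W = np.array([[1.0], [1.0], [1.0]])
b = np.array([0.0, -0.5, 0.5])
regions = count_regions_1d([W], [b])
```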
arXiv Detail & Related papers (2023-10-28T15:04:53Z)
- The Influence of Network Structural Preference on Node Classification and Link Prediction [0.0]
This work introduces a new feature abstraction method, namely the Transition Probabilities Matrix (TPM).
The success of the proposed embedding method is tested on node identification/classification and link prediction on three commonly used real-world networks.
arXiv Detail & Related papers (2022-08-07T12:56:28Z)
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing the utility, with a large margin compared to existing approaches.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
- Clustering-Based Interpretation of Deep ReLU Network [17.234442722611803]
We recognize that the non-linear behavior of the ReLU function gives rise to a natural clustering.
We propose a method to increase the level of interpretability of a fully connected feedforward ReLU neural network.
arXiv Detail & Related papers (2021-10-13T09:24:11Z)
- Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
arXiv Detail & Related papers (2021-06-15T09:03:28Z)
- Hierarchical Verification for Adversarial Robustness [89.30150585592648]
We introduce a new framework for the exact point-wise $\ell_p$ robustness verification problem.
LayerCert exploits the layer-wise geometric structure of deep feed-forward networks with rectified linear activations (ReLU networks).
We show that LayerCert provably reduces the number and size of the convex programs that one needs to solve compared to GeoCert.
arXiv Detail & Related papers (2020-07-23T07:03:05Z)
- Reachability Analysis for Feed-Forward Neural Networks using Face Lattices [10.838397735788245]
We propose a parallelizable technique to compute the exact reachable set of a neural network for a given input set.
Our approach is capable of constructing the complete input set given an output set, so that any input that leads to safety violation can be tracked.
arXiv Detail & Related papers (2020-03-02T22:23:57Z)
- Dense Residual Network: Enhancing Global Dense Feature Flow for Character Recognition [75.4027660840568]
This paper explores how to enhance the local and global dense feature flow by exploiting hierarchical features fully from all the convolution layers.
Technically, we propose an efficient and effective CNN framework, i.e., Fast Dense Residual Network (FDRN) for text recognition.
arXiv Detail & Related papers (2020-01-23T06:55:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.