Parallel Algorithms for Exact Enumeration of Deep Neural Network
Activation Regions
- URL: http://arxiv.org/abs/2403.00860v1
- Date: Thu, 29 Feb 2024 20:48:39 GMT
- Title: Parallel Algorithms for Exact Enumeration of Deep Neural Network
Activation Regions
- Authors: Sabrina Drammis, Bowen Zheng, Karthik Srinivasan, Robert C. Berwick,
Nancy A. Lynch, Robert Ajemian
- Abstract summary: A feedforward neural network constructs a mapping from inputs to outputs by partitioning its input space into a set of convex regions.
We present parallel algorithms for exact enumeration in deep (and shallow) neural networks.
- Score: 2.782882026947924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A feedforward neural network using rectified linear units constructs a
mapping from inputs to outputs by partitioning its input space into a set of
convex regions where points within a region share a single affine
transformation. In order to understand how neural networks work, when and why
they fail, and how they compare to biological intelligence, we need to
understand the organization and formation of these regions. Step one is to
design and implement algorithms for exact region enumeration in networks beyond
toy examples.
In this work, we present parallel algorithms for exact enumeration in deep
(and shallow) neural networks. Our work has three main contributions: (1) we
present a novel algorithm framework and parallel algorithms for region
enumeration; (2) we implement one of our algorithms on a variety of network
architectures and experimentally show how the number of regions dictates
runtime; and (3) we show, using our algorithm's output, how the dimension of a
region's affine transformation impacts further partitioning of the region by
deeper layers.
To our knowledge, we run our implemented algorithm on networks larger than
all of the networks used in the existing region enumeration literature.
Further, we experimentally demonstrate the importance of parallelism for region
enumeration of any reasonably sized network.
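To make the object of study concrete, here is a minimal, single-threaded sketch (not the paper's parallel algorithm) of exact region enumeration for a toy shallow ReLU network: it brute-forces all activation patterns, keeps those whose convex region is non-empty (an LP feasibility check), and reports the rank of each region's affine map, the quantity contribution (3) relates to further partitioning. The architecture sizes and the bounding box are illustrative assumptions.

```python
import itertools

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy shallow ReLU net R^2 -> R^2; the paper targets deep networks, but one
# hidden layer keeps the region structure easy to see (illustrative sizes).
W1, b1 = rng.standard_normal((3, 2)), rng.standard_normal(3)
W2, b2 = rng.standard_normal((2, 3)), rng.standard_normal(2)

BOX = 10.0  # search inside [-BOX, BOX]^2 so every feasibility LP is bounded

regions = []
for pattern in itertools.product([0, 1], repeat=3):
    p = np.array(pattern, dtype=float)
    s = np.where(p == 1.0, -1.0, 1.0)  # -1 for active units, +1 for inactive
    # The candidate region {x : active units have W1 x + b1 >= 0, inactive
    # units <= 0}, written in the standard LP form A x <= c:
    A, c = s[:, None] * W1, -s * b1
    res = linprog(np.zeros(2), A_ub=A, b_ub=c,
                  bounds=[(-BOX, BOX)] * 2, method="highs")
    if res.status == 0:  # LP feasible: the region is non-empty in the box
        D = np.diag(p)
        M = W2 @ D @ W1          # on this region the whole network is
        v = W2 @ D @ b1 + b2     # exactly the affine map x -> M x + v
        regions.append((pattern, np.linalg.matrix_rank(M)))

for pattern, rank in regions:
    print(pattern, "affine-map rank:", rank)
print(len(regions), "non-empty activation regions")
```

Brute force scales as 2^(number of hidden units), which is exactly why the paper's algorithmic framework and its parallelism matter for anything beyond toy sizes.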
Related papers
- The Evolution of the Interplay Between Input Distributions and Linear
Regions in Networks [20.97553518108504]
We count the number of convex linear regions in ReLU-based deep neural networks.
In particular, we prove that for any one-dimensional input, there exists a minimum threshold for the number of neurons required to express it.
We also unveil the iterative refinement process of decision boundaries in ReLU networks during training.
arXiv Detail & Related papers (2023-10-28T15:04:53Z)
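For context on the entry above, a minimal grid-based sketch of counting the linear regions a deep ReLU network induces on a one-dimensional input (an estimate at a chosen resolution, not the paper's analysis; the architecture sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Small deep ReLU net on a 1-D input: 1 -> 8 -> 8 -> 1 (illustrative sizes);
# only the hidden layers determine the activation pattern.
layers = [(rng.standard_normal((8, 1)), rng.standard_normal(8)),
          (rng.standard_normal((8, 8)), rng.standard_normal(8)),
          (rng.standard_normal((1, 8)), rng.standard_normal(1))]

def activation_pattern(x):
    """Concatenated on/off pattern of all hidden units at scalar input x."""
    h, bits = np.array([x]), []
    for W, b in layers[:-1]:
        pre = W @ h + b
        bits.append(pre > 0)
        h = np.maximum(pre, 0.0)
    return np.concatenate(bits).tobytes()

# Each maximal run of consecutive grid points sharing one activation pattern
# is, at this resolution, one linear region; refining the grid can only
# reveal more regions, never fewer.
xs = np.linspace(-5.0, 5.0, 20001)
patterns = [activation_pattern(x) for x in xs]
count = 1 + sum(p != q for p, q in zip(patterns, patterns[1:]))
print("estimated linear regions on [-5, 5]:", count)
```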
- Connections between Operator-splitting Methods and Deep Neural Networks with Applications in Image Segmentation [7.668812831777923]
Establishing connections between deep neural networks and mathematical algorithms is still an area under active development.
We give an algorithmic explanation for deep neural networks, focusing on their connections with operator-splitting methods.
We propose two networks inspired by operator-splitting methods solving the Potts model.
arXiv Detail & Related papers (2023-07-18T08:06:14Z)
- A singular Riemannian geometry approach to Deep Neural Networks II. Reconstruction of 1-D equivalence classes [78.120734120667]
We reconstruct, in the input space, the preimage of a point in the output manifold.
We focus for simplicity on the case of neural network maps from n-dimensional real spaces to (n - 1)-dimensional real spaces.
arXiv Detail & Related papers (2021-12-17T11:47:45Z)
- Unsupervised Domain-adaptive Hash for Networks [81.49184987430333]
Domain-adaptive hash learning has enjoyed considerable success in the computer vision community.
We develop an unsupervised domain-adaptive hash learning method for networks, dubbed UDAH.
arXiv Detail & Related papers (2021-08-20T12:09:38Z)
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- A General Computational Framework to Measure the Expressiveness of Complex Networks Using a Tighter Upper Bound of Linear Regions [13.030269373463168]
The upper bound on the number of regions partitioned by a rectifier network, rather than the number itself, is a more practical measure of the expressiveness of a DNN.
We propose a new and tighter upper bound on the number of regions for any network structure.
Our experiments show that our upper bound is tighter than existing ones, and explain why skip connections and residual structures can improve network performance.
arXiv Detail & Related papers (2020-12-08T14:01:20Z)
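For context on the upper-bound entry above: the classical baseline that tighter bounds refine comes from hyperplane-arrangement counting. For a single layer of n ReLU units on d-dimensional inputs, the n preactivation hyperplanes cut the input space into at most

    % Maximum number of regions created by n hyperplanes in R^d
    % (Zaslavsky's theorem); deep-network bounds compose such counts per layer.
    R(n, d) = \sum_{j=0}^{d} \binom{n}{j}

regions. The paper's own tighter bound for general structures is not reproduced here.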
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- Inductive Graph Embeddings through Locality Encodings [0.42970700836450487]
We look at the problem of finding inductive network embeddings in large networks without domain-dependent node/edge attributes.
We propose to use a set of basic predefined local encodings as the basis of a learning algorithm.
This method achieves state-of-the-art performance in tasks such as role detection, link prediction and node classification.
arXiv Detail & Related papers (2020-09-26T13:09:11Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that iteratively partitions the verification problem and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
- Adaptive Region-Based Active Learning [57.78835999208091]
We present a new active learning algorithm that adaptively partitions the input space into a finite number of regions.
We prove theoretical guarantees for both the generalization error and the label complexity of our algorithm.
We report the results of an extensive suite of experiments on several real-world datasets.
arXiv Detail & Related papers (2020-02-18T03:16:36Z)
- Fast Geometric Projections for Local Robustness Certification [29.833035157227553]
We present a fast procedure for checking local robustness in feed-forward neural networks.
We show how the regions around a point can be analyzed using simple geometric projections.
arXiv Detail & Related papers (2020-02-12T00:21:55Z)
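For context on the entry above, a minimal sketch of the geometric-projection idea for the first layer: the distance from a point x0 to a unit's activation boundary {x : w . x + b = 0} is |w . x0 + b| / ||w||, and if every such distance exceeds eps, no first-layer unit can flip inside the eps-ball around x0. This sketch covers only the first layer, whose boundaries are exact hyperplanes; deeper boundaries are piecewise-linear and need the region exploration the paper describes. Sizes and the eps value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((16, 4)), rng.standard_normal(16)  # toy layer
x0, eps = rng.standard_normal(4), 0.1

# Orthogonal projection gives the distance from x0 to each activation
# boundary {x : w . x + b = 0}: |w . x0 + b| / ||w||.
dists = np.abs(W1 @ x0 + b1) / np.linalg.norm(W1, axis=1)

if dists.min() > eps:
    # No first-layer unit flips inside the eps-ball, so the first layer is
    # affine there; certification can proceed within a single region.
    print("eps-ball lies inside one first-layer activation region")
else:
    print("an activation boundary crosses the eps-ball; explore regions")
```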