Lower and Upper Bounds for Numbers of Linear Regions of Graph
Convolutional Networks
- URL: http://arxiv.org/abs/2206.00228v1
- Date: Wed, 1 Jun 2022 04:32:23 GMT
- Title: Lower and Upper Bounds for Numbers of Linear Regions of Graph
Convolutional Networks
- Authors: Hao Chen, Yu Guang Wang, Huan Xiong
- Abstract summary: The number of linear regions has been considered a good measure for the expressivity of neural networks with piecewise linear activation.
We present estimates for the number of linear regions of classic graph convolutional networks (GCNs) in both one-layer and multi-layer scenarios.
- Score: 11.338307976409707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Research on characterizing GNN expressiveness has attracted much
attention, as graph neural networks have achieved remarkable success over the
last five years. The number of
linear regions has been considered a good measure for the expressivity of
neural networks with piecewise linear activation. In this paper, we present
estimates for the number of linear regions of classic graph convolutional
networks (GCNs) in both one-layer and multi-layer scenarios. In
particular, we obtain an optimal upper bound for the maximum number of linear
regions for one-layer GCNs, and the upper and lower bounds for multi-layer
GCNs. Simulation estimates suggest that the true maximum number of linear
regions is likely closer to our lower bound. These results imply that, per
parameter, the number of linear regions of multi-layer GCNs is in general
exponentially greater than that of one-layer GCNs, suggesting that deeper GCNs
have more expressivity than shallow GCNs.
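To make the linear-region notion concrete, here is a minimal sketch (illustrative only, not the paper's proof technique) that counts the distinct ReLU activation patterns a randomly initialized one-layer GCN produces on sampled node features; each distinct pattern witnesses one linear region of the input-to-output map, so the count is an empirical lower bound. The graph, dimensions, and weights are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-node graph with self-loops and symmetric normalization (assumed).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = d_inv_sqrt @ A @ d_inv_sqrt        # normalized adjacency

n, d_in, d_hid = 4, 3, 5
W = rng.standard_normal((d_in, d_hid))     # random one-layer GCN weights

def activation_pattern(X):
    """On/off pattern of the ReLU units for node features X (n x d_in)."""
    pre = A_hat @ X @ W                    # one-layer GCN pre-activation
    return tuple((pre > 0).astype(int).ravel())

# Each distinct pattern observed witnesses one linear region, so the
# count below is an empirical lower bound on the number of regions.
patterns = {activation_pattern(rng.standard_normal((n, d_in)))
            for _ in range(20000)}
print("distinct activation patterns found:", len(patterns))
```

Exhausting patterns by sampling quickly becomes infeasible as the width grows, which is why closed-form upper and lower bounds like those in the paper are useful.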
Related papers
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z) - MGNNI: Multiscale Graph Neural Networks with Implicit Layers [53.75421430520501]
Implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs.
We introduce and justify two weaknesses of implicit GNNs: the constrained expressiveness due to their limited effective range for capturing long-range dependencies, and their lack of ability to capture multiscale information on graphs at multiple resolutions.
We propose a multiscale graph neural network with implicit layers (MGNNI) which is able to model multiscale structures on graphs and has an expanded effective range for capturing long-range dependencies.
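As a rough illustration of what an implicit GNN layer computes (a generic fixed-point sketch; the contraction factor alpha, the tanh nonlinearity, and all names are assumptions, and MGNNI's multiscale propagation and training machinery are omitted):

```python
import numpy as np

def implicit_gnn_layer(A_hat, X, W, B, alpha=0.5, tol=1e-6, max_iter=500):
    """Solve the fixed point Z = tanh(alpha * A_hat @ Z @ W + X @ B).

    Generic implicit-layer sketch: alpha < 1 and the 1-Lipschitz tanh keep
    the iteration contractive for suitably scaled W.  MGNNI itself uses a
    more refined parameterization plus multiple propagation scales.
    """
    Z = np.zeros((X.shape[0], W.shape[0]))
    for _ in range(max_iter):
        Z_next = np.tanh(alpha * A_hat @ Z @ W + X @ B)
        if np.linalg.norm(Z_next - Z) < tol:
            return Z_next
        Z = Z_next
    return Z
```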
arXiv Detail & Related papers (2022-10-15T18:18:55Z) - On the Number of Regions of Piecewise Linear Neural Networks [16.78532039510369]
Many feedforward neural networks (NNs) generate continuous and piecewise-linear (CPWL) mappings.
The number of these so-called linear regions offers a natural metric to characterize the expressiveness of CPWL NNs.
We introduce a complementary framework to estimate the average number of linear regions produced by a CPWL NN.
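A common way to probe such counts empirically (a generic sketch under standard assumptions, not the cited paper's framework) is to walk a line segment through input space and count activation-pattern changes of a ReLU MLP; each change marks a crossing into a new linear region.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small random ReLU MLP (weights are illustrative, not from the paper).
W1, b1 = rng.standard_normal((16, 2)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((16, 16)), rng.standard_normal(16)

def pattern(x):
    """Joint on/off pattern of both ReLU layers at input x."""
    h1 = np.maximum(W1 @ x + b1, 0.0)
    h2 = np.maximum(W2 @ h1 + b2, 0.0)
    return tuple(np.concatenate([h1 > 0, h2 > 0]).astype(int))

# Walk a segment; every pattern change is a region-boundary crossing.
x0, x1 = rng.standard_normal(2), rng.standard_normal(2)
pats = [pattern((1 - t) * x0 + t * x1) for t in np.linspace(0, 1, 10000)]
regions = 1 + sum(p != q for p, q in zip(pats, pats[1:]))
print("linear regions crossed along the segment:", regions)
```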
arXiv Detail & Related papers (2022-06-17T08:17:28Z) - Dissecting the Diffusion Process in Linear Graph Convolutional Networks [71.30132908130581]
Graph Convolutional Networks (GCNs) have attracted increasing attention in recent years.
Recent works show that a linear GCN can achieve comparable performance to the original non-linear GCN.
We propose Decoupled Graph Convolution (DGC) that decouples the terminal time and the feature propagation steps.
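The decoupling can be sketched as follows (a reconstruction from the abstract with hypothetical parameter names T and K): propagation runs K Euler steps of size T/K on a graph heat equation, so terminal time and step count are tuned independently.

```python
import numpy as np

def dgc_propagate(A_hat, X, T=2.0, K=10):
    """Decoupled diffusion sketch: K Euler steps of size T/K for the graph
    heat equation dX/dt = -(I - A_hat) X.  Terminal time T and step count
    K are chosen independently, which is the decoupling DGC describes;
    the parameter names and this exact discretization are assumptions.
    """
    n = A_hat.shape[0]
    step = (1.0 - T / K) * np.eye(n) + (T / K) * A_hat   # one Euler step
    for _ in range(K):
        X = step @ X
    return X
```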
arXiv Detail & Related papers (2021-02-22T02:45:59Z) - A General Computational Framework to Measure the Expressiveness of
Complex Networks Using a Tighter Upper Bound of Linear Regions [13.030269373463168]
The upper bound on the number of regions partitioned by a rectifier network, rather than the exact number itself, is a more practical measure of the expressiveness of a DNN.
We propose a new and tighter upper bound on the number of regions for arbitrary network structures.
Our experiments show that our upper bound is tighter than existing ones, and explain why skip connections and residual structures can improve network performance.
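For context, the classical baseline that such tighter bounds improve upon is the hyperplane-arrangement count: a one-layer rectifier network with m neurons on an n-dimensional input cuts the space with m hyperplanes, and Zaslavsky's theorem bounds the number of regions.

```latex
% Classical upper bound for m ReLU hyperplanes in R^n (Zaslavsky).
R(m, n) \le \sum_{j=0}^{n} \binom{m}{j}
```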
arXiv Detail & Related papers (2020-12-08T14:01:20Z) - Bounding The Number of Linear Regions in Local Area for Neural Networks
with ReLU Activations [6.4817648240626005]
We present the first method to estimate the upper bound of the number of linear regions in any sphere in the input space of a given ReLU neural network.
Our experiments showed that, while training a neural network, the boundaries of the linear regions tend to move away from the training data points.
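That finding can be probed with a simple measurement (a sketch valid for a one-hidden-layer network only; deeper networks need a layer-by-layer treatment): the distance from a point x to neuron i's activation boundary w_i^T x + b_i = 0 is |w_i^T x + b_i| / ||w_i||, and the minimum over neurons is the distance to the nearest linear-region boundary.

```python
import numpy as np

def distance_to_nearest_boundary(W, b, x):
    """Distance from x to the closest activation hyperplane
    w_i^T x + b_i = 0 of a one-hidden-layer ReLU net (W: m x n, b: m).
    Tracking this at training points over training would show the
    drifting-away effect the cited paper reports.
    """
    return np.min(np.abs(W @ x + b) / np.linalg.norm(W, axis=1))
```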
arXiv Detail & Related papers (2020-07-14T04:06:00Z) - DeeperGCN: All You Need to Train Deeper GCNs [66.64739331859226]
Graph Convolutional Networks (GCNs) have been drawing significant attention with the power of representation learning on graphs.
Unlike Convolutional Neural Networks (CNNs), which are able to take advantage of stacking very deep layers, GCNs suffer from vanishing gradient, over-smoothing and over-fitting issues when going deeper.
This paper proposes DeeperGCN that is capable of successfully and reliably training very deep GCNs.
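A hedged sketch of the general recipe behind trainable deep GCNs (pre-normalized residual blocks; this follows the spirit of the abstract, and the normalization and aggregation here are stand-ins, not DeeperGCN's exact generalized aggregation):

```python
import numpy as np

def residual_gcn_layer(A_hat, H, W, eps=1e-6):
    """Pre-norm residual GCN block: H + ReLU(Norm(A_hat @ H @ W)).
    The skip connection keeps gradients flowing through many layers,
    countering the vanishing-gradient and over-smoothing issues the
    abstract mentions; DeeperGCN's actual block differs in details.
    """
    Z = A_hat @ H @ W                      # graph convolution (W: d x d)
    Z = (Z - Z.mean(axis=1, keepdims=True)) / (Z.std(axis=1, keepdims=True) + eps)
    return H + np.maximum(Z, 0.0)          # residual connection
```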
arXiv Detail & Related papers (2020-06-13T23:00:22Z) - On the Number of Linear Regions of Convolutional Neural Networks [0.6206641883102021]
Our results suggest that deeper CNNs have more powerful expressivity than their shallow counterparts, while CNNs have more expressivity than fully-connected NNs per parameter.
arXiv Detail & Related papers (2020-06-01T14:38:05Z) - Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
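A minimal sketch of what binarized parameters mean in a single GCN layer (forward pass only; the scaling scheme here is a common choice and an assumption, and BGN's actual training with gradient estimators is not shown):

```python
import numpy as np

def binarized_gcn_layer(A_hat, X, W_real):
    """Forward pass with sign-binarized weights.  Keeping sign(W) plus a
    single per-layer scale needs roughly one bit per weight, which is
    where BGN-style time/space savings come from (sketch, not BGN itself).
    """
    alpha = np.abs(W_real).mean()          # per-layer scaling factor
    return np.maximum(A_hat @ X @ (alpha * np.sign(W_real)), 0.0)
```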
arXiv Detail & Related papers (2020-04-19T09:43:14Z) - Infinitely Wide Graph Convolutional Networks: Semi-supervised Learning
via Gaussian Processes [144.6048446370369]
Graph convolutional neural networks (GCNs) have recently demonstrated promising results on graph-based semi-supervised classification.
We propose a GP regression model via GCNs (GPGC) for graph-based semi-supervised learning.
We conduct extensive experiments to evaluate GPGC and demonstrate that it outperforms other state-of-the-art methods.
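The construction can be sketched as ordinary GP regression with a covariance built from graph-propagated features (an illustrative stand-in; GPGC's actual kernel arises from the infinite-width GCN limit):

```python
import numpy as np

def gp_predict(A_hat, X, y, train_idx, test_idx, noise=0.1):
    """GP posterior mean with a linear kernel on propagated features
    Phi = A_hat @ X -- a stand-in for the GCN-derived kernel in GPGC.
    """
    Phi = A_hat @ X
    K = Phi @ Phi.T                                    # full covariance
    K_tt = K[np.ix_(train_idx, train_idx)]
    K_st = K[np.ix_(test_idx, train_idx)]
    coef = np.linalg.solve(K_tt + noise * np.eye(len(train_idx)),
                           y[train_idx])
    return K_st @ coef                                 # posterior mean
```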
arXiv Detail & Related papers (2020-02-26T10:02:32Z) - Empirical Studies on the Properties of Linear Regions in Deep Neural
Networks [34.08593191989188]
A deep neural network (DNN) with piecewise linear activations can partition the input space into numerous small linear regions.
It is believed that the number of these regions represents the expressivity of the DNN.
We study their local properties, such as the inspheres, the directions of the corresponding hyperplanes, the decision boundaries, and the relevance of the surrounding regions.
arXiv Detail & Related papers (2020-01-04T12:47:58Z)