Effects of Graph Convolutions in Deep Networks
- URL: http://arxiv.org/abs/2204.09297v1
- Date: Wed, 20 Apr 2022 08:24:43 GMT
- Title: Effects of Graph Convolutions in Deep Networks
- Authors: Aseem Baranwal, Kimon Fountoulakis, Aukosh Jagannath
- Abstract summary: We present a rigorous theoretical understanding of the effects of graph convolutions in multi-layer networks.
We show that a single graph convolution expands the regime of the distance between the means where multi-layer networks can classify the data.
We provide both theoretical and empirical insights into the performance of graph convolutions placed in different combinations among the layers of a network.
- Score: 8.937905773981702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Convolutional Networks (GCNs) are one of the most popular architectures
that are used to solve classification problems accompanied by graphical
information. We present a rigorous theoretical understanding of the effects of
graph convolutions in multi-layer networks. We study these effects through the
node classification problem of a non-linearly separable Gaussian mixture model
coupled with a stochastic block model. First, we show that a single graph
convolution expands the regime of the distance between the means where
multi-layer networks can classify the data by a factor of at least
$1/\sqrt[4]{\mathbb{E}{\rm deg}}$, where $\mathbb{E}{\rm deg}$ denotes the
expected degree of a node. Second, we show that with a slightly stronger graph
density, two graph convolutions improve this factor to at least
$1/\sqrt[4]{n}$, where $n$ is the number of nodes in the graph. Finally, we
provide both theoretical and empirical insights into the performance of graph
convolutions placed in different combinations among the layers of a network,
concluding that performance is similar across all such placements. We present
extensive experiments on both synthetic and real-world
data that illustrate our results.
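To illustrate the mechanism behind these bounds, here is a minimal sketch (our own illustration, not the authors' code; it uses a simplified, linearly separable variant of the paper's mixture model, and all constants are arbitrary). It samples Gaussian node features on a two-block stochastic block model and compares the separation-to-noise ratio before and after one degree-normalized graph convolution:

```python
# Minimal sketch (our illustration, not the authors' code): one graph
# convolution X -> D^{-1}(A + I)X averages ~E[deg] noisy features per node,
# shrinking within-class noise while (for p > q) largely preserving the gap
# between class means -- the mechanism behind the 1/(E deg)^{1/4} gain.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 8               # nodes, feature dimension (arbitrary)
p, q = 0.05, 0.01            # intra-/inter-class edge probabilities
sep = 0.2                    # per-coordinate separation of the class means

y = rng.integers(0, 2, n)                        # two roughly balanced classes
means = np.where(y[:, None] == 1, sep / 2, -sep / 2)
X = means + rng.standard_normal((n, d))          # unit Gaussian noise

# Stochastic block model: edge probability depends on label agreement.
P = np.where(y[:, None] == y[None, :], p, q)
A = np.triu(rng.random((n, n)) < P, 1).astype(float)
A = A + A.T                                      # symmetric, no self-loops

# One graph convolution: degree-normalized averaging with self-loops.
A_hat = A + np.eye(n)
X_conv = (A_hat @ X) / A_hat.sum(axis=1, keepdims=True)

def snr(Z):
    """Distance between empirical class means over within-class std."""
    m1, m0 = Z[y == 1].mean(0), Z[y == 0].mean(0)
    noise = np.concatenate([Z[y == 1] - m1, Z[y == 0] - m0]).std()
    return np.linalg.norm(m1 - m0) / noise

print(f"SNR of raw features      : {snr(X):.2f}")
print(f"SNR after one convolution: {snr(X_conv):.2f}")   # markedly higher
```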
Related papers
- Learning to Approximate Adaptive Kernel Convolution on Graphs [4.434835769977399]
We propose a diffusion learning framework, where the range of feature aggregation is controlled by the scale of a diffusion kernel.
Our model is tested on various standard datasets for node-wise classification, achieving state-of-the-art performance.
It is also validated on real-world brain network data for graph classification, demonstrating its practicality for Alzheimer's disease classification; a toy sketch of scale-controlled diffusion follows this entry.
arXiv Detail & Related papers (2024-01-22T10:57:11Z)
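The scale-controlled aggregation in the entry above can be illustrated with a plain heat-kernel diffusion. This is a generic sketch, not the paper's method (the paper learns the kernel scale, while here it is fixed by hand); the function and constants are our own:

```python
# Generic sketch of scale-controlled diffusion aggregation (not the paper's
# method): diffuse features with the heat kernel exp(-s L); small s keeps
# aggregation local, large s widens its range.
import numpy as np

def heat_diffuse(A, X, s, K=30):
    """Approximate exp(-s L) X with a truncated Taylor series, where
    L = I - D^{-1/2} A D^{-1/2} is the symmetric normalized Laplacian."""
    deg = np.maximum(A.sum(axis=1), 1e-12)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    out, term = X.copy(), X.copy()
    for k in range(1, K + 1):
        term = (-s / k) * (L @ term)       # term_k = (-s)^k L^k X / k!
        out += term
    return out

# Toy usage: a 5-node path graph with an indicator feature on node 0.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
X = np.zeros((5, 1)); X[0, 0] = 1.0
print(heat_diffuse(A, X, s=0.5).ravel())   # mass stays near node 0
print(heat_diffuse(A, X, s=3.0).ravel())   # mass spreads along the path
```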
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator; a plain Gumbel-Softmax sketch follows this entry.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
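For reference, the Gumbel-Softmax operator mentioned above can be sketched in its plain, unkernelized form; NodeFormer's contribution is a kernelized variant that avoids the all-pair cost, which this toy version makes no attempt at. Names and defaults are our own:

```python
# Plain (unkernelized) Gumbel-Softmax: a differentiable approximate sample
# from Categorical(softmax(logits)); NodeFormer's kernelized variant avoids
# the all-pair cost that a naive use of this operator would incur.
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    g = -np.log(-np.log(rng.random(logits.shape)))    # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z -= z.max(axis=-1, keepdims=True)                # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy usage: soft, nearly one-hot weights over 5 candidate neighbours.
scores = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
print(gumbel_softmax(scores, tau=0.3))   # low tau -> close to one-hot
```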
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN); a generic unrolling sketch follows this entry.
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
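To show what "unroll and truncate proximal gradient iterations" means in general, here is a generic unrolled ISTA for a sparse linear inverse problem. This is a sketch of the unrolling idea on a toy objective of our own choosing, not GDN's graph deconvolution itself:

```python
# Generic unrolled, truncated proximal gradient (ISTA) for
# min_x 0.5*||A x - b||^2 + lam*||x||_1 -- the unrolling idea GDN applies
# to its graph deconvolution objective (this toy is not GDN itself).
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)  # prox of t*||.||_1

def unrolled_ista(A, b, lam=0.1, K=20):
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(K):                          # K fixed "layers"
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

# Toy usage: recover the support of a sparse vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[[3, 40]] = 2.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = unrolled_ista(A, b)
print(sorted(np.argsort(-np.abs(x_hat))[:2]))   # -> [3, 40]
```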
- Multilayer Clustered Graph Learning [66.94201299553336]
We use a contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to well-separated clusters with respect to the learned representative graph, which we use for solving clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Factorizable Graph Convolutional Networks [90.59836684458905]
We introduce a novel graph convolutional network (GCN) that explicitly disentangles intertwined relations encoded in a graph.
FactorGCN takes a simple graph as input, and disentangles it into several factorized graphs.
We evaluate the proposed FactorGCN both qualitatively and quantitatively on synthetic and real-world datasets.
arXiv Detail & Related papers (2020-10-12T03:01:40Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Unsupervised Graph Representation by Periphery and Hierarchical Information Maximization [18.7475578342125]
The advent of graph neural networks has improved the state of the art for both node-level and whole-graph representation in a vector space.
For whole-graph representation, most existing graph neural networks are trained with a supervised graph classification loss.
In this paper, we propose an unsupervised graph neural network that generates a vector representation of an entire graph.
arXiv Detail & Related papers (2020-06-08T15:50:40Z)
- Graph Representation Learning Network via Adaptive Sampling [4.996520403438455]
Graph Attention Network (GAT) and GraphSAGE are neural network architectures that operate on graph-structured data.
One challenge raised by GraphSAGE is how to smartly combine neighbour features based on graph structure.
We propose a new, more efficient architecture that addresses these issues and can incorporate different edge-type information; a minimal mean-aggregation sketch follows this entry.
arXiv Detail & Related papers (2020-06-08T14:36:20Z)
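To make the neighbour-combination step concrete, here is a minimal GraphSAGE-style mean-aggregation layer in its standard textbook form; it is not the adaptive-sampling architecture the entry proposes, and all names and shapes are our own:

```python
# Minimal GraphSAGE-style mean-aggregation layer (a standard baseline form,
# not the adaptive-sampling architecture proposed above).
import numpy as np

def sage_mean_layer(A, X, W_self, W_neigh):
    """h_v = ReLU(W_self^T x_v + W_neigh^T mean_{u in N(v)} x_u)."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # avoid divide-by-0
    neigh_mean = (A @ X) / deg
    return np.maximum(X @ W_self + neigh_mean @ W_neigh, 0.0)

# Toy usage: 4-node cycle, 3-d input features, 2-d output features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))
H = sage_mean_layer(A, X, rng.standard_normal((3, 2)), rng.standard_normal((3, 2)))
print(H.shape)   # (4, 2)
```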