AutoGraph: Automated Graph Neural Network
- URL: http://arxiv.org/abs/2011.11288v1
- Date: Mon, 23 Nov 2020 09:04:17 GMT
- Title: AutoGraph: Automated Graph Neural Network
- Authors: Yaoman Li and Irwin King
- Abstract summary: We propose a method to automate the deep Graph Neural Networks (GNNs) design.
In our proposed method, we add a new type of skip connection to the GNNs search space to encourage feature reuse.
We also allow our evolutionary algorithm to increase the layers of GNNs during the evolution to generate deeper networks.
- Score: 45.94642721490744
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graphs play an important role in many applications. Recently, Graph Neural
Networks (GNNs) have achieved promising results in graph analysis tasks. Some
state-of-the-art GNN models have been proposed, e.g., Graph Convolutional
Networks (GCNs), Graph Attention Networks (GATs), etc. Despite these successes,
most GNNs have only shallow structures, which limits their expressive power. To
fully utilize the power of deep neural networks, some deep GNNs have been
proposed recently. However, the design of deep GNNs
requires significant architecture engineering. In this work, we propose a
method to automate the deep GNNs design. In our proposed method, we add a new
type of skip connection to the GNNs search space to encourage feature reuse and
alleviate the vanishing gradient problem. We also allow our evolutionary
algorithm to increase the layers of GNNs during the evolution to generate
deeper networks. We evaluate our method in the graph node classification task.
The experiments show that the GNNs generated by our method can obtain
state-of-the-art results in Cora, Citeseer, Pubmed and PPI datasets.
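The abstract describes two mechanisms: a new skip-connection type added to the search space to encourage feature reuse, and an evolutionary search that is allowed to deepen candidate networks over generations. Below is a minimal, self-contained Python sketch of that kind of evolutionary loop. It is illustrative only and not the authors' implementation: the names (Layer, mutate, evaluate, evolve) and the toy fitness function are hypothetical placeholders, and in the paper the fitness would be the validation accuracy of each trained candidate GNN on node classification.

```python
import random
from dataclasses import dataclass, field
from typing import List

AGGREGATORS = ["gcn", "gat", "sage"]   # example layer types in a GNN search space
ACTIVATIONS = ["relu", "elu", "tanh"]

@dataclass
class Layer:
    aggregator: str
    activation: str
    skips: List[int] = field(default_factory=list)  # indices of earlier layers whose output is reused

def random_layer(position: int) -> Layer:
    # a skip connection may point to any earlier layer (index -1 denotes the input features)
    skips = [i for i in range(-1, position) if random.random() < 0.3]
    return Layer(random.choice(AGGREGATORS), random.choice(ACTIVATIONS), skips)

def random_architecture(num_layers: int = 2) -> List[Layer]:
    return [random_layer(i) for i in range(num_layers)]

def mutate(arch: List[Layer]) -> List[Layer]:
    """Apply one random mutation; 'add_layer' lets networks grow deeper during evolution."""
    arch = [Layer(l.aggregator, l.activation, list(l.skips)) for l in arch]  # copy parent
    op = random.choice(["change_aggregator", "change_activation", "toggle_skip", "add_layer"])
    if op == "add_layer":
        arch.append(random_layer(len(arch)))
    else:
        i = random.randrange(len(arch))
        if op == "change_aggregator":
            arch[i].aggregator = random.choice(AGGREGATORS)
        elif op == "change_activation":
            arch[i].activation = random.choice(ACTIVATIONS)
        else:  # toggle a skip connection to an earlier layer (feature reuse)
            target = random.randrange(-1, i) if i > 0 else -1
            if target in arch[i].skips:
                arch[i].skips.remove(target)
            else:
                arch[i].skips.append(target)
    return arch

def evaluate(arch: List[Layer]) -> float:
    # placeholder fitness: the paper would train the candidate GNN and use validation
    # accuracy; here we only reward moderate depth and feature reuse for illustration
    depth = len(arch)
    reuse = sum(len(l.skips) for l in arch)
    return -abs(depth - 6) + 0.1 * reuse + random.random() * 0.01

def evolve(pop_size: int = 20, generations: int = 50) -> List[Layer]:
    population = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        # tournament-style step: sample candidates, mutate the best, retire the oldest
        sample = random.sample(population, k=5)
        parent = max(sample, key=evaluate)
        population.append(mutate(parent))
        population.pop(0)
    return max(population, key=evaluate)

if __name__ == "__main__":
    best = evolve()
    for i, layer in enumerate(best):
        print(f"layer {i}: {layer.aggregator}/{layer.activation}, skips from {layer.skips}")
```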
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S²GNNs).
S²GNNs combine spatially and spectrally parametrized graph filters.
We show that S²GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - Geodesic Graph Neural Network for Efficient Graph Representation Learning [34.047527874184134]
We propose an efficient GNN framework called Geodesic GNN (GDGNN).
It injects conditional relationships between nodes into the model without labeling.
Conditioned on the geodesic representations, GDGNN is able to generate node, link, and graph representations that carry much richer structural information than plain GNNs.
arXiv Detail & Related papers (2022-10-06T02:02:35Z) - Addressing Over-Smoothing in Graph Neural Networks via Deep Supervision [13.180922099929765]
Deep graph neural networks (GNNs) suffer from over-smoothing when the number of layers increases.
We propose DSGNNs enhanced with deep supervision where representations learned at all layers are used for training.
We show that DSGNNs are resilient to over-smoothing and can outperform competitive benchmarks on node and graph property prediction problems.
arXiv Detail & Related papers (2022-02-25T06:05:55Z) - KerGNNs: Interpretable Graph Neural Networks with Graph Kernels [14.421535610157093]
Graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks.
We propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs).
KerGNNs integrate graph kernels into the message passing process of GNNs.
We show that our method achieves competitive performance compared with existing state-of-the-art methods.
arXiv Detail & Related papers (2022-01-03T06:16:30Z) - Network In Graph Neural Network [9.951298152023691]
We present a model-agnostic methodology, Network In Graph Neural Network (NGNN), that allows arbitrary GNN models to increase their model capacity by making the model deeper.
Instead of adding or widening GNN layers, NGNN deepens a GNN model by inserting non-linear feedforward neural network layer(s) within each GNN layer.
arXiv Detail & Related papers (2021-11-23T03:58:56Z) - Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z) - Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences of its use.