Learn Layer-wise Connections in Graph Neural Networks
- URL: http://arxiv.org/abs/2112.13585v1
- Date: Mon, 27 Dec 2021 09:33:22 GMT
- Title: Learn Layer-wise Connections in Graph Neural Networks
- Authors: Lanning Wei, Huan Zhao, Zhiqiang He
- Abstract summary: We propose a framework LLC (Learn Layer-wise Connections) based on neural architecture search (NAS) to learn adaptive connections among intermediate layers in GNNs.
LLC contains one novel search space which consists of 3 types of blocks and learnable connections, and one differentiable search algorithm to enable the efficient search process.
Extensive experiments on five real-world datasets are conducted, and the results show that the searched layer-wise connections can not only improve the performance but also alleviate the over-smoothing problem.
- Score: 12.363386808994079
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, Graph Neural Networks (GNNs) have shown superior performance
on diverse applications on real-world datasets. To improve the model capacity
and alleviate the over-smoothing problem, several methods have been proposed to
incorporate the intermediate layers via layer-wise connections. However, due to
the highly diverse graph types, the performance of existing methods varies
across graphs, leading to a need for data-specific layer-wise connection
methods. To address this problem, we propose a novel framework LLC (Learn
Layer-wise Connections) based on neural architecture search (NAS) to learn
adaptive connections among intermediate layers in GNNs. LLC contains one novel
search space which consists of 3 types of blocks and learnable connections, and
one differentiable search algorithm to enable the efficient search process.
Extensive experiments on five real-world datasets are conducted, and the
results show that the searched layer-wise connections can not only improve the
performance but also alleviate the over-smoothing problem.
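The core idea behind learnable layer-wise connections can be sketched in a few lines: each block's input is a softmax-weighted sum of all earlier blocks' outputs, and the connection logits are the architecture parameters optimized by gradient descent, as in differentiable NAS. The following is a minimal NumPy sketch under that reading of the abstract; the blocks are simplified to linear maps, and all function and variable names are illustrative, not the paper's actual search space or algorithm.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward_with_learned_connections(h0, layer_fns, alphas):
    """Run a stack of layers where each layer's input is a
    softmax-weighted sum of the outputs of all previous layers.
    alphas[i] holds the (learnable) connection logits for layer i."""
    outputs = [h0]
    for i, fn in enumerate(layer_fns):
        # weights over the i+1 outputs produced so far
        w = softmax(alphas[i][: len(outputs)])
        x = sum(wj * hj for wj, hj in zip(w, outputs))
        outputs.append(fn(x))
    return outputs[-1]

# Toy example: three "blocks" are random linear maps on 4-dim features.
rng = np.random.default_rng(0)
layers = [lambda x, W=rng.normal(size=(4, 4)) * 0.1: x @ W for _ in range(3)]
alphas = [np.zeros(3) for _ in range(3)]  # uniform connections initially
h = forward_with_learned_connections(np.ones(4), layers, alphas)
print(h.shape)  # (4,)
```

In a differentiable search, the `alphas` would be trained jointly with the layer weights (e.g. on a validation loss) and then discretized into a fixed connection pattern.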
Related papers
- Layer-wise Linear Mode Connectivity [52.6945036534469]
Averaging neural network parameters is an intuitive method for fusing the knowledge of two independent models.
It is most prominently used in federated learning.
We analyse the performance of the models that result from averaging single layers, or groups of layers.
arXiv Detail & Related papers (2023-07-13T09:39:10Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI)
We propose D$2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- AGNN: Alternating Graph-Regularized Neural Networks to Alleviate Over-Smoothing [29.618952407794776]
We propose an Alternating Graph-regularized Neural Network (AGNN) composed of Graph Convolutional Layer (GCL) and Graph Embedding Layer (GEL)
GEL is derived from graph-regularized optimization containing a Laplacian embedding term, which can alleviate the over-smoothing problem.
AGNN is evaluated via a large number of experiments including performance comparison with some multi-layer or multi-order graph neural networks.
arXiv Detail & Related papers (2023-04-14T09:20:03Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
The best models for such data types in most standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
- Automatic Relation-aware Graph Network Proliferation [182.30735195376792]
We propose Automatic Relation-aware Graph Network Proliferation (ARGNP) for efficiently searching GNNs.
Its relation-aware operations can extract hierarchical node/relational information and provide anisotropic guidance for message passing on a graph.
Experiments on six datasets for four graph learning tasks demonstrate that GNNs produced by our method are superior to the current state-of-the-art hand-crafted and search-based GNNs.
arXiv Detail & Related papers (2022-05-31T10:38:04Z)
- Bandit Sampling for Multiplex Networks [8.771092194928674]
We propose an algorithm for scalable learning on multiplex networks with a large number of layers.
An online learning algorithm learns how to sample relevant neighboring layers, so that only the layers with relevant information are aggregated during training.
We present experimental results on both synthetic and real-world scenarios.
arXiv Detail & Related papers (2022-02-08T03:26:34Z)
- Edge-featured Graph Neural Architecture Search [131.4361207769865]
We propose Edge-featured Graph Neural Architecture Search to find the optimal GNN architecture.
Specifically, we design rich entity and edge updating operations to learn high-order representations.
We show EGNAS can search better GNNs with higher performance than current state-of-the-art human-designed and search-based GNNs.
arXiv Detail & Related papers (2021-09-03T07:53:18Z)
- Pooling Architecture Search for Graph Classification [36.728077433219916]
Graph neural networks (GNNs) are designed to learn node-level representation based on neighborhood aggregation schemes.
Pooling methods are applied after the aggregation operation to generate coarse-grained graphs.
It is a challenging problem to design a universal pooling architecture to perform well in most cases.
We propose to use neural architecture search (NAS) to search for adaptive pooling architectures for graph classification.
arXiv Detail & Related papers (2021-08-24T09:03:03Z)
- RAN-GNNs: breaking the capacity limits of graph neural networks [43.66682619000099]
Graph neural networks have become a staple in problems addressing learning and analysis of data defined over graphs.
Recent works attribute the limited capacity of standard GNNs to the need to consider multiple neighborhood sizes at the same time and to adaptively tune them.
We show that employing a randomly-wired architecture can be a more effective way to increase the capacity of the network and obtain richer representations.
arXiv Detail & Related papers (2021-03-29T12:34:36Z)
- Clustering multilayer graphs with missing nodes [4.007017852999008]
Clustering is a fundamental problem in network analysis where the goal is to regroup nodes with similar connectivity profiles.
We propose a new framework that allows for layers to be defined on different sets of nodes.
arXiv Detail & Related papers (2021-03-04T18:56:59Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.