Learning How to Propagate Messages in Graph Neural Networks
- URL: http://arxiv.org/abs/2310.00697v1
- Date: Sun, 1 Oct 2023 15:09:59 GMT
- Title: Learning How to Propagate Messages in Graph Neural Networks
- Authors: Teng Xiao, Zhengyu Chen, Donglin Wang, and Suhang Wang
- Abstract summary: This paper studies the problem of learning message propagation strategies for graph neural networks (GNNs).
We introduce the optimal propagation steps as latent variables to help find the maximum-likelihood estimation of the GNN parameters.
Our proposed framework can effectively learn personalized and interpretable message propagation strategies in GNNs.
- Score: 55.2083896686782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies the problem of learning message propagation strategies for
graph neural networks (GNNs). One of the challenges for graph neural networks
is that of defining the propagation strategy. For instance, the choices of
propagation steps are often specialized to a single graph and are not
personalized to different nodes. To compensate for this, in this paper, we
present learning to propagate, a general learning framework that not only
learns the GNN parameters for prediction but more importantly, can explicitly
learn interpretable and personalized propagation strategies for different
nodes and various types of graphs. We introduce the optimal propagation steps
as latent variables to help find the maximum-likelihood estimation of the GNN
parameters in a variational Expectation-Maximization (VEM) framework. Extensive
experiments on various types of graph benchmarks demonstrate that our proposed
framework achieves significantly better performance than state-of-the-art
methods and can effectively learn personalized and interpretable message
propagation strategies in GNNs.
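The core idea of treating propagation steps as latent variables can be illustrated with a minimal numpy sketch. This is a hypothetical simplification, not the paper's VEM algorithm: each node carries a distribution `q` over propagation steps 0..K (playing the role of the posterior over the latent step variable), and its final representation mixes the intermediate propagated features under that distribution.

```python
import numpy as np

def normalize_adj(A):
    # Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def personalized_propagate(A, X, q):
    """A: (n, n) adjacency, X: (n, d) features, q: (n, K+1) per-node
    step distributions (rows sum to 1). Returns (n, d) mixed features."""
    P = normalize_adj(A)
    K = q.shape[1] - 1
    H = X.copy()
    out = q[:, [0]] * H              # contribution of 0 propagation steps
    for k in range(1, K + 1):
        H = P @ H                    # one more round of message passing
        out += q[:, [k]] * H         # weight by the prob. of stopping at step k
    return out

# Toy example: a path graph of 4 nodes with 2-dim features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)
q = np.full((4, 3), 1 / 3)           # uniform over steps {0, 1, 2}
Z = personalized_propagate(A, X, q)
print(Z.shape)  # (4, 2)
```

In the paper's framework, `q` would be inferred per node in the E-step rather than fixed by hand; a node whose `q` concentrates on step 0 keeps its raw features, while one concentrating on step K uses deeply propagated information.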
Related papers
- DiRW: Path-Aware Digraph Learning for Heterophily [23.498557237805414]
Graph neural network (GNN) has emerged as a powerful representation learning tool for graph-structured data.
We propose Directed Random Walk (DiRW), which can be viewed as a plug-and-play strategy or an innovative neural architecture.
DiRW incorporates a direction-aware path sampler optimized from perspectives of walk probability, length, and number.
arXiv Detail & Related papers (2024-10-14T09:26:56Z)
- Next Level Message-Passing with Hierarchical Support Graphs [20.706469085872516]
Hierarchical Support Graph (HSG) is a framework for enhancing information flow in graphs, independent of the specific MPNN layers utilized.
We present a theoretical analysis of HSGs, investigate their empirical performance, and demonstrate that HSGs can surpass other methods augmented with virtual nodes.
arXiv Detail & Related papers (2024-06-22T13:57:09Z) - GraphGLOW: Universal and Generalizable Structure Learning for Graph
Neural Networks [72.01829954658889]
This paper introduces the mathematical definition of this novel problem setting.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework to make the homogeneous GNNs have adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- Towards Better Generalization with Flexible Representation of Multi-Module Graph Neural Networks [0.27195102129094995]
We use a random graph generator to investigate how the graph size and structural properties affect the predictive performance of GNNs.
We present specific evidence that the average node degree is a key feature in determining whether GNNs can generalize to unseen graphs.
We propose a multi-module GNN framework that allows the network to adapt flexibly to new graphs by generalizing a single canonical nonlinear transformation over aggregated inputs.
arXiv Detail & Related papers (2022-09-14T12:13:59Z)
- Scaling Up Graph Neural Networks Via Graph Coarsening [18.176326897605225]
Scalability of graph neural networks (GNNs) is one of the major challenges in machine learning.
In this paper, we propose to use graph coarsening for scalable training of GNNs.
We show that, by simply applying off-the-shelf coarsening methods, we can reduce the number of nodes by up to a factor of ten without a noticeable drop in classification accuracy.
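The coarsening step described above can be sketched in a few lines of numpy. This is a generic illustration, not the paper's specific method: given a cluster assignment (which the off-the-shelf coarsening method would compute), nodes in the same cluster are merged via an assignment matrix P, yielding a smaller weighted graph to train the GNN on.

```python
import numpy as np

def coarsen(A, X, clusters):
    """A: (n, n) adjacency, X: (n, d) features, clusters: length-n array of
    cluster ids in 0..c-1. Returns the coarse weighted adjacency (c, c) and
    mean-pooled features (c, d)."""
    n = A.shape[0]
    c = clusters.max() + 1
    P = np.zeros((n, c))
    P[np.arange(n), clusters] = 1.0        # hard assignment matrix
    A_c = P.T @ A @ P                       # coarse (weighted) adjacency
    sizes = P.sum(axis=0, keepdims=True).T  # nodes per cluster, shape (c, 1)
    X_c = (P.T @ X) / sizes                 # mean-pool features per cluster
    return A_c, X_c

# Toy: a 4-node cycle coarsened into 2 super-nodes.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.eye(4)
A_c, X_c = coarsen(A, X, np.array([0, 0, 1, 1]))
print(A_c.shape)  # (2, 2)
```

Entries of `A_c` count (or, for weighted graphs, sum) the edges between clusters, so the coarse graph preserves aggregate connectivity while shrinking the node set.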
arXiv Detail & Related papers (2021-06-09T15:46:17Z)
- HeteGCN: Heterogeneous Graph Convolutional Networks for Text Classification [1.9739269019020032]
We propose a heterogeneous graph convolutional network (HeteGCN) modeling approach.
The main idea is to learn feature embeddings and derive document embeddings using a HeteGCN architecture.
In effect, the number of model parameters is reduced significantly, enabling faster training and improving performance in small labeled training set scenarios.
arXiv Detail & Related papers (2020-08-19T12:24:35Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.