Pathfinder Discovery Networks for Neural Message Passing
- URL: http://arxiv.org/abs/2010.12878v2
- Date: Tue, 16 Feb 2021 22:45:42 GMT
- Title: Pathfinder Discovery Networks for Neural Message Passing
- Authors: Benedek Rozemberczki, Peter Englert, Amol Kapoor, Martin Blais, Bryan Perozzi
- Abstract summary: Pathfinder Discovery Networks (PDNs) are a method for jointly learning a message passing graph over a multiplex network.
PDNs inductively learn an aggregated weight for each edge, optimized to produce the best outcome for the downstream learning task.
- Score: 8.633430288397376
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we propose Pathfinder Discovery Networks (PDNs), a method for
jointly learning a message passing graph over a multiplex network with a
downstream semi-supervised model. PDNs inductively learn an aggregated weight
for each edge, optimized to produce the best outcome for the downstream
learning task. PDNs are a generalization of attention mechanisms on graphs
which allow flexible construction of similarity functions between nodes, edge
convolutions, and cheap multiscale mixing layers. We show that PDNs overcome
weaknesses of existing methods for graph attention (e.g. Graph Attention
Networks), such as the diminishing weight problem. Our experimental results
demonstrate competitive predictive performance on academic node classification
tasks. Additional results from a challenging suite of node classification
experiments show how PDNs can learn a wider class of functions than existing
baselines. We analyze the relative computational complexity of PDNs, and show
that PDN runtime is not considerably higher than static-graph models. Finally,
we discuss how PDNs can be used to construct an easily interpretable attention
mechanism that allows users to understand information propagation in the graph.
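A minimal sketch of the core mechanism described in the abstract, assuming a PyTorch implementation; the `pathfinder` MLP, dimensions, and activation choices are illustrative assumptions, not the authors' code. A small edge-scoring network collapses each edge's multiplex features into a single positive weight, which a message passing layer then uses for aggregation:

```python
# Sketch of a PDN-style layer: an edge-scoring MLP maps the multiplex edge
# features of each edge to one positive weight, and a message passing step
# aggregates neighbor messages with those learned weights.
# All names and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class PDNLayer(nn.Module):
    def __init__(self, node_dim: int, edge_dim: int, hidden_dim: int):
        super().__init__()
        # "Pathfinder": collapses the multiplex edge channels into one weight per edge.
        self.pathfinder = nn.Sequential(
            nn.Linear(edge_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),  # keeps the aggregated edge weight positive and bounded
        )
        self.update = nn.Linear(node_dim, node_dim)

    def forward(self, x, edge_index, edge_attr):
        # x: [num_nodes, node_dim], edge_index: [2, num_edges],
        # edge_attr: [num_edges, edge_dim], e.g. one channel per multiplex layer
        src, dst = edge_index
        w = self.pathfinder(edge_attr)      # [num_edges, 1] learned edge weights
        messages = x[src] * w               # weight each neighbor message
        out = torch.zeros_like(x)
        out.index_add_(0, dst, messages)    # sum weighted messages per target node
        return torch.relu(self.update(out))
```

Because the weights come from a shared network applied to edge features, rather than being stored per edge, such a layer scores unseen edges inductively, consistent with the abstract's claim.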
Related papers
- GraphRARE: Reinforcement Learning Enhanced Graph Neural Network with Relative Entropy [21.553180564868306]
GraphRARE is a framework built upon node relative entropy and deep reinforcement learning.
An innovative node relative entropy is used to measure mutual information between node pairs.
A deep reinforcement learning-based algorithm is developed to optimize the graph topology.
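A hedged sketch of what a "node relative entropy" between a node pair might look like, here taken as the KL divergence between softmax-normalized feature vectors; the paper's exact definition may differ:

```python
# Illustrative node relative entropy: KL divergence between the
# (softmax-normalized) feature distributions of two nodes.
# This is an assumption in the spirit of the summary, not the paper's code.
import torch
import torch.nn.functional as F

def node_relative_entropy(x_i: torch.Tensor, x_j: torch.Tensor) -> torch.Tensor:
    p = F.softmax(x_i, dim=-1)                   # treat node features as a distribution
    log_q = F.log_softmax(x_j, dim=-1)
    return F.kl_div(log_q, p, reduction="sum")   # KL(p || q)
```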
arXiv Detail & Related papers (2023-12-15T11:30:18Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
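A minimal sketch of differentiable edge sampling with a Gumbel-Softmax operator, the device this summary credits for efficient all-pair propagation; this is an assumed illustration of the mechanism, not the paper's implementation:

```python
# Gumbel-Softmax over pairwise affinities yields a soft, differentiable
# "adjacency" for all-pair message passing. Shapes are illustrative.
import torch
import torch.nn.functional as F

def gumbel_softmax_edges(scores: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # scores: [num_nodes, num_nodes] unnormalized pairwise affinities
    gumbel = -torch.log(-torch.log(torch.rand_like(scores) + 1e-9) + 1e-9)
    return F.softmax((scores + gumbel) / tau, dim=-1)

# Each node then aggregates from all others with these soft weights:
# h_new = gumbel_softmax_edges(q @ k.t()) @ v
```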
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation [41.00398052556643]
We propose a novel Adversarial Knowledge Distillation framework for graph models named GraphAKD.
The discriminator distinguishes between teacher knowledge and what the student inherits, while the student GNN works as a generator and aims to fool the discriminator.
The results imply that GraphAKD can precisely transfer knowledge from a complicated teacher GNN to a compact student GNN.
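A hedged sketch of the adversarial setup as summarized, with hypothetical embedding sizes: the discriminator is trained to separate teacher and student node embeddings, while the student is trained to fool it:

```python
# Adversarial distillation loop, GAN-style. Dimensions (64/32) and the
# discriminator architecture are assumptions for illustration.
import torch
import torch.nn as nn

disc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(h_teacher, h_student):
    # Label teacher embeddings as real, student embeddings as fake.
    real = bce(disc(h_teacher), torch.ones(h_teacher.size(0), 1))
    fake = bce(disc(h_student.detach()), torch.zeros(h_student.size(0), 1))
    return real + fake

def student_adversarial_loss(h_student):
    # The student (generator) wants its embeddings labeled as "teacher".
    return bce(disc(h_student), torch.ones(h_student.size(0), 1))
```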
arXiv Detail & Related papers (2022-05-24T00:04:43Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
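For concreteness, a small sketch of the two constructions (illustrative PyTorch, not the paper's code):

```python
# Build a K-nearest neighbor edge list from pairwise distances,
# and a fully-connected edge list without self-loops.
import torch

def knn_graph(x: torch.Tensor, k: int) -> torch.Tensor:
    dist = torch.cdist(x, x)                     # [n, n] pairwise distances
    dist.fill_diagonal_(float("inf"))            # exclude self-loops
    nbrs = dist.topk(k, largest=False).indices   # k closest nodes per row
    src = torch.arange(x.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])  # edge_index [2, n*k]

def fc_graph(n: int) -> torch.Tensor:
    idx = torch.arange(n)
    src, dst = torch.meshgrid(idx, idx, indexing="ij")
    mask = src != dst                            # drop self-loops
    return torch.stack([src[mask], dst[mask]])
```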
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- Inferential SIR-GN: Scalable Graph Representation Learning [0.4699313647907615]
Graph representation learning methods generate numerical vector representations for the nodes in a network.
In this work, we propose Inferential SIR-GN, a model which is pre-trained on random graphs, then computes node representations rapidly.
We demonstrate that the model is able to capture a node's structural role information, and show excellent performance on node and graph classification tasks on unseen networks.
arXiv Detail & Related papers (2021-11-08T20:56:37Z)
- Deep Structured Instance Graph for Distilling Object Detectors [82.16270736573176]
We present a simple knowledge structure to exploit and encode information inside the detection system to facilitate detector knowledge distillation.
We achieve new state-of-the-art results on the challenging COCO object detection task with diverse student-teacher pairs on both one- and two-stage detectors.
arXiv Detail & Related papers (2021-09-27T08:26:00Z)
- Uniting Heterogeneity, Inductiveness, and Efficiency for Graph Representation Learning [68.97378785686723]
Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
Most GNNs, however, are designed only for homogeneous graphs, which limits their adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta-path-free message passing scheme that packs heterogeneous node features together with their associated edges from both low- and high-order neighbor nodes (see the sketch below).
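A rough sketch of the "pack node features with their associated edges" idea, under assumed names and shapes: each message concatenates the source node's features with the connecting edge's features before a shared transform, so no meta-paths are required:

```python
# Meta-path-free heterogeneous message passing: concatenate node and edge
# features per message. Names and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class HeteroPackLayer(nn.Module):
    def __init__(self, node_dim: int, edge_dim: int, out_dim: int):
        super().__init__()
        self.msg = nn.Linear(node_dim + edge_dim, out_dim)

    def forward(self, x, edge_index, edge_attr):
        src, dst = edge_index
        packed = torch.cat([x[src], edge_attr], dim=-1)  # node + edge features
        m = torch.relu(self.msg(packed))
        out = torch.zeros(x.size(0), m.size(-1), dtype=m.dtype, device=m.device)
        out.index_add_(0, dst, m)                        # sum messages per target
        return out
```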
arXiv Detail & Related papers (2021-04-04T23:31:39Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, Node2Seq, which learns node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to assign explicit weights during information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
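A minimal sketch of that recipe for a single target node, assuming neighbors are padded to a fixed count; all names here are illustrative, not the paper's code:

```python
# Node2Seq-style aggregation: attention scores order the neighbors into a
# sequence, and a 1D convolution gives each rank position its own weight.
import torch
import torch.nn as nn

class Node2SeqSketch(nn.Module):
    def __init__(self, dim: int, max_neighbors: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.conv = nn.Conv1d(dim, dim, kernel_size=max_neighbors)

    def forward(self, neighbor_feats: torch.Tensor) -> torch.Tensor:
        # neighbor_feats: [max_neighbors, dim] for one target node (padded)
        order = self.score(neighbor_feats).squeeze(-1).argsort(descending=True)
        seq = neighbor_feats[order].t().unsqueeze(0)  # [1, dim, max_neighbors]
        return self.conv(seq).squeeze()               # [dim] aggregated embedding
```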
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
- Pointer Graph Networks [48.44209547013781]
Graph neural networks (GNNs) are typically applied to static graphs that are assumed to be known upfront.
Pointer Graph Networks (PGNs) augment sets or graphs with additional inferred edges for improved model generalisation ability.
PGNs allow each node to dynamically point to another node, followed by message passing over these pointers.
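A hedged sketch of the pointer step, assuming scores come from a dot product of node states: each node points to its highest-scoring peer, and the inferred edges are appended to the input graph before message passing:

```python
# Inferred pointer edges: each node dynamically points to one other node.
# The dot-product scoring is an assumption for illustration.
import torch

def pointer_edges(h: torch.Tensor) -> torch.Tensor:
    scores = h @ h.t()                    # [n, n] pointer logits
    scores.fill_diagonal_(float("-inf"))  # a node cannot point to itself
    dst = scores.argmax(dim=-1)           # each node picks one target
    src = torch.arange(h.size(0))
    return torch.stack([src, dst])        # extra edge_index [2, n]

# Message passing then runs over the union of input and pointer edges:
# torch.cat([edge_index, pointer_edges(h)], dim=1)
```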
arXiv Detail & Related papers (2020-06-11T12:52:31Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.