Uniting Heterogeneity, Inductiveness, and Efficiency for Graph
Representation Learning
- URL: http://arxiv.org/abs/2104.01711v1
- Date: Sun, 4 Apr 2021 23:31:39 GMT
- Title: Uniting Heterogeneity, Inductiveness, and Efficiency for Graph
Representation Learning
- Authors: Tong Chen, Hongzhi Yin, Jie Ren, Zi Huang, Xiangliang Zhang, Hao Wang
- Abstract summary: Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
Most GNNs, however, are designed only for homogeneous graphs, limiting their adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta-path-free message passing scheme that packs heterogeneous node features together with their associated edges from both low- and high-order neighbor nodes.
- Score: 68.97378785686723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the ubiquitous graph-structured data in various applications, models
that can learn compact but expressive vector representations of nodes have
become highly desirable. Recently, bearing the message passing paradigm, graph
neural networks (GNNs) have greatly advanced the performance of node
representation learning on graphs. However, a majority class of GNNs are only
designed for homogeneous graphs, leading to inferior adaptivity to the more
informative heterogeneous graphs with various types of nodes and edges. Also,
despite the necessity of inductively producing representations for completely
new nodes (e.g., in streaming scenarios), few heterogeneous GNNs can bypass the
transductive learning scheme where all nodes must be known during training.
Furthermore, the training efficiency of most heterogeneous GNNs has been
hindered by their sophisticated designs for extracting the semantics associated
with each meta path or relation. In this paper, we propose WIde and DEep
message passing Network (WIDEN) to cope with the aforementioned problems about
heterogeneity, inductiveness, and efficiency that are rarely investigated
together in graph representation learning. In WIDEN, we propose a novel
inductive, meta path-free message passing scheme that packs up heterogeneous
node features with their associated edges from both low- and high-order
neighbor nodes. To further improve the training efficiency, we innovatively
present an active downsampling strategy that drops unimportant neighbor nodes
to facilitate faster information propagation. Experiments on three real-world
heterogeneous graphs have further validated the efficacy of WIDEN on both
transductive and inductive node representation learning, as well as the
superior training efficiency against state-of-the-art baselines.
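To make the abstract's two key ideas concrete, here is a minimal sketch of a meta-path-free message passing step that packs neighbor node features with edge-type embeddings, combined with a simple neighbor-downsampling step. The paper does not publish reference code here, so all names, shapes, and the norm-based importance score are illustrative assumptions, not WIDEN's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous graph: 6 nodes, edges carry a type id (hypothetical setup).
num_nodes, feat_dim, num_edge_types, edge_dim = 6, 8, 3, 4
x = rng.normal(size=(num_nodes, feat_dim))              # node features
edge_type_emb = rng.normal(size=(num_edge_types, edge_dim))
edges = [(0, 1, 0), (0, 2, 1), (0, 3, 2), (1, 4, 0), (2, 5, 1)]  # (src, dst, type)

def message_pass(node, edges, x, edge_type_emb, keep_k=2):
    """One meta-path-free step: pack each neighbor's features with its
    edge-type embedding, drop all but the keep_k most 'important' neighbors
    (scored here by message norm as a stand-in for the paper's active
    downsampling strategy), then mean-aggregate."""
    msgs = []
    for src, dst, t in edges:
        if dst == node:
            msgs.append(np.concatenate([x[src], edge_type_emb[t]]))
        elif src == node:
            msgs.append(np.concatenate([x[dst], edge_type_emb[t]]))
    if not msgs:
        return np.zeros(x.shape[1] + edge_type_emb.shape[1])
    msgs = np.stack(msgs)
    scores = np.linalg.norm(msgs, axis=1)   # proxy importance score
    top = np.argsort(scores)[-keep_k:]      # keep only the top-k neighbors
    return msgs[top].mean(axis=0)

h0 = message_pass(0, edges, x, edge_type_emb)  # aggregated message for node 0
```

Note that no meta path is enumerated: edge types enter only through their embeddings, which is what allows the scheme to handle new nodes and relations inductively.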
Related papers
- Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network [4.078409998614025]
Under heterophily, where nodes with different labels tend to be connected based on semantic meaning, Graph Neural Networks (GNNs) often exhibit suboptimal performance.
We propose and demonstrate that the valuable semantic information inherent in heterophily can be utilized effectively in graph learning.
We propose HiGNN, an approach that constructs an additional graph structure integrating heterophilous information by leveraging node distributions.
arXiv Detail & Related papers (2024-03-26T03:29:42Z)
- GraphRARE: Reinforcement Learning Enhanced Graph Neural Network with Relative Entropy [21.553180564868306]
GraphRARE is a framework built upon node relative entropy and deep reinforcement learning.
An innovative node relative entropy is used to measure mutual information between node pairs.
A deep reinforcement learning-based algorithm is developed to optimize the graph topology.
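The underlying quantity behind a "node relative entropy" is the KL divergence between two discrete distributions (e.g., two nodes' neighborhood label distributions). GraphRARE's actual measure is more involved; the following is only a sketch of that basic quantity, with hypothetical input distributions.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions,
    with eps-smoothing and renormalization for numerical safety."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Two nodes' neighborhood label distributions (hypothetical values):
d = relative_entropy([0.7, 0.2, 0.1], [0.1, 0.2, 0.7])  # large divergence
```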
arXiv Detail & Related papers (2023-12-15T11:30:18Z)
- Breaking the Entanglement of Homophily and Heterophily in Semi-supervised Node Classification [25.831508778029097]
We introduce AMUD, which quantifies the relationship between node profiles and topology from a statistical perspective.
We also propose ADPA as a new directed graph learning paradigm for AMUD.
arXiv Detail & Related papers (2023-12-07T07:54:11Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Graph Condensation for Inductive Node Representation Learning [59.76374128436873]
We propose mapping-aware graph condensation (MCond)
MCond integrates new nodes into the synthetic graph for inductive representation learning.
On the Reddit dataset, MCond achieves up to 121.5x inference speedup and 55.9x reduction in storage requirements.
arXiv Detail & Related papers (2023-07-29T12:11:14Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
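The Gumbel-Softmax operator itself is a standard differentiable relaxation of categorical sampling: add Gumbel noise to logits and apply a temperature-scaled softmax. The sketch below shows only this basic operator, not NodeFormer's kernelized all-pair variant, and the logits are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """Differentiable relaxation of categorical sampling: perturb logits
    with Gumbel(0, 1) noise, then apply a temperature-scaled softmax."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = y - y.max()                                       # numerical stability
    e = np.exp(y)
    return e / e.sum()

w = gumbel_softmax(np.array([2.0, 0.5, -1.0]))  # soft one-hot sample
```

Lower temperatures `tau` push the output toward a hard one-hot vector while keeping the operation differentiable for end-to-end training.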
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework to make the homogeneous GNNs have adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.