Graph Condensation for Inductive Node Representation Learning
- URL: http://arxiv.org/abs/2307.15967v2
- Date: Sat, 9 Dec 2023 05:46:16 GMT
- Title: Graph Condensation for Inductive Node Representation Learning
- Authors: Xinyi Gao, Tong Chen, Yilong Zang, Wentao Zhang, Quoc Viet Hung Nguyen, Kai Zheng, Hongzhi Yin
- Abstract summary: We propose mapping-aware graph condensation (MCond).
MCond integrates new nodes into the synthetic graph for inductive representation learning.
On the Reddit dataset, MCond achieves up to 121.5x inference speedup and 55.9x reduction in storage requirements.
- Score: 59.76374128436873
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) encounter significant computational challenges
when handling large-scale graphs, which severely restricts their efficacy
across diverse applications. To address this limitation, graph condensation has
emerged as a promising technique that constructs a small synthetic graph for
efficiently training GNNs while retaining performance. However, due to the
topological structure among nodes, graph condensation can only condense the
observed training nodes and their corresponding structure, and thus cannot
effectively handle unseen data. Consequently, the
original large graph is still required in the inference stage to perform
message passing to inductive nodes, resulting in substantial computational
demands. To overcome this issue, we propose mapping-aware graph condensation
(MCond), explicitly learning the one-to-many node mapping from original nodes
to synthetic nodes to seamlessly integrate new nodes into the synthetic graph
for inductive representation learning. This enables direct information
propagation on the synthetic graph, which is much more efficient than on the
original large graph. Specifically, MCond employs an alternating optimization
scheme with innovative loss terms from transductive and inductive perspectives,
allowing graph condensation and node mapping learning to mutually reinforce
each other. Extensive experiments demonstrate the efficacy of our approach in
inductive inference. On the Reddit dataset, MCond achieves up to 121.5x
inference speedup and 55.9x reduction in storage requirements compared with
counterparts based on the original graph.
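A minimal sketch of the mapping idea is given below. It is illustrative only: the similarity-based soft assignment and all tensor names are assumptions, not the paper's actual modules.

```python
import torch

torch.manual_seed(0)
n_orig, n_syn, d = 1000, 50, 64      # original nodes, synthetic nodes, feature dim

X_orig = torch.randn(n_orig, d)      # original node features
X_syn = torch.randn(n_syn, d)        # learned synthetic node features
A_syn = torch.softmax(torch.randn(n_syn, n_syn), dim=1)  # soft synthetic adjacency

# One-to-many mapping M: each original node is expressed as a convex
# combination of synthetic nodes (in MCond, learned jointly with the graph).
M = torch.softmax(X_orig @ X_syn.T, dim=1)               # (n_orig, n_syn)

# Transductive view: original-node embeddings are read off the synthetic graph.
H_orig = M @ (A_syn @ X_syn)

# Inductive view: an unseen node is mapped the same way, so inference
# touches only the small synthetic graph, never the original large one.
x_new = torch.randn(1, d)
m_new = torch.softmax(x_new @ X_syn.T, dim=1)
h_new = m_new @ (A_syn @ X_syn)
print(H_orig.shape, h_new.shape)     # (1000, 64) and (1, 64)
```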
Related papers
- Disentangled Condensation for Large-scale Graphs [31.781721873508978]
Graph condensation has emerged as an intriguing technique to reduce the expensive training cost of Graph Neural Networks (GNNs).
We propose to disentangle the condensation process into a two-stage GNN-free paradigm, independently condensing nodes and generating edges.
This simple yet effective approach achieves at least a 10x speedup over state-of-the-art methods with comparable accuracy on medium-scale graphs.
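A toy sketch of the two-stage, GNN-free pipeline follows; the mean-pooling condenser and similarity-threshold edge generator are placeholder choices, not the paper's modules.

```python
import torch

torch.manual_seed(0)
X = torch.randn(10_000, 32)                   # original node features
n_syn = 100

# Stage 1: condense nodes with no GNN in the loop; here each synthetic node
# is just the mean of a balanced random group (a placeholder condenser).
groups = torch.randperm(X.size(0)) % n_syn
X_syn = torch.stack([X[groups == k].mean(0) for k in range(n_syn)])

# Stage 2: independently generate edges among condensed nodes from pairwise
# feature similarity (one plausible GNN-free edge generator).
sim = torch.sigmoid(X_syn @ X_syn.T / X_syn.size(1) ** 0.5)
A_syn = (sim > 0.5).float()
print(X_syn.shape, A_syn.shape)               # (100, 32) and (100, 100)
```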
arXiv Detail & Related papers (2024-01-18T09:59:00Z)
- GraphRARE: Reinforcement Learning Enhanced Graph Neural Network with Relative Entropy [21.553180564868306]
GraphRARE is a framework built upon node relative entropy and deep reinforcement learning.
An innovative node relative entropy is used to measure mutual information between node pairs.
A deep reinforcement learning-based algorithm is developed to optimize the graph topology.
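The summary does not give the exact definition; assuming a KL-divergence-style measure over softmax-normalized node features, a minimal version of the entropy score (with the RL topology optimizer omitted) might look like:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(6, 16)                  # node features
P = F.softmax(X, dim=1)                 # treat each node as a distribution

def relative_entropy(i: int, j: int) -> torch.Tensor:
    # KL(P_i || P_j): one plausible reading of "node relative entropy"
    return torch.sum(P[i] * (P[i].log() - P[j].log()))

# Pairwise scores an RL agent could use as a reward signal when deciding
# which edges to add or remove.
scores = torch.stack([torch.stack([relative_entropy(i, j) for j in range(6)])
                      for i in range(6)])
print(scores.shape)                     # torch.Size([6, 6])
```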
arXiv Detail & Related papers (2023-12-15T11:30:18Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
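The dense, non-kernelized form of this scheme is easy to write down; NodeFormer's contribution is replacing the N x N softmax below with a random-feature kernel for linear complexity, which this sketch omits.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, d, tau = 8, 16, 0.5
Q, K, V = torch.randn(N, d), torch.randn(N, d), torch.randn(N, d)

# All-pair message passing with a Gumbel-Softmax over attention logits:
# each node stochastically (but differentiably) picks message sources
# among all N nodes.
logits = Q @ K.T / d ** 0.5
u = torch.rand(N, N).clamp(1e-9, 1 - 1e-9)
gumbel = -torch.log(-torch.log(u))              # Gumbel(0, 1) noise
attn = F.softmax((logits + gumbel) / tau, dim=1)
H = attn @ V
print(H.shape)                                  # torch.Size([8, 16])
```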
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
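SFGC distills by matching training trajectories of teacher GNNs; the sketch below shows only the structure-free endpoint, i.e. a learnable synthetic node set with no adjacency, trained here with a plain MLP as a stand-in.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Structure-free condensation keeps only a small synthetic node set
# (features + labels); no adjacency is synthesized.
X_syn = torch.randn(60, 32, requires_grad=True)     # learnable synthetic features
y_syn = torch.arange(60) % 3                        # fixed synthetic labels, 3 classes

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(list(model.parameters()) + [X_syn], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X_syn), y_syn)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.3f}")
```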
arXiv Detail & Related papers (2023-06-05T07:53:52Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
Despite its simplicity, Local-GCL achieves highly competitive performance in self-supervised node representation learning tasks on graphs of various scales and properties.
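A minimal version of the localized contrastive idea, assuming linked nodes serve as positives in an InfoNCE loss (Local-GCL additionally uses a kernelized approximation for negatives, not shown):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
Z = F.normalize(torch.randn(5, 8), dim=1)        # node embeddings
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]         # positive pairs: linked nodes
tau = 0.5

sim = Z @ Z.T / tau
loss = 0.0
for i, j in edges:
    # InfoNCE: the neighbor is the positive, all nodes are candidates
    loss = loss - sim[i, j] + torch.logsumexp(sim[i], dim=0)
loss = loss / len(edges)
print(loss.item())
```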
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
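The unrolling pattern can be sketched with the simplest possible data-fidelity term; the identity mixture, learnable step size, and soft-threshold below are simplifications of the paper's actual deconvolution objective.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class GDNLayer(nn.Module):
    """One unrolled proximal-gradient step: gradient move + learned shrinkage."""
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))     # learnable step size
        self.thresh = nn.Parameter(torch.tensor(0.05))  # learnable threshold

    def forward(self, A_latent, A_obs):
        grad = A_latent - A_obs              # gradient of 0.5 * ||A - A_obs||^2
        Z = A_latent - self.step * grad
        return torch.relu(Z - self.thresh)   # keeps the estimate sparse, nonnegative

A_obs = torch.rand(6, 6)                     # observed graph
A = torch.zeros_like(A_obs)                  # latent graph estimate
layers = nn.ModuleList(GDNLayer() for _ in range(5))  # truncated unrolling
for layer in layers:
    A = layer(A, A_obs)
print(A.shape)                               # torch.Size([6, 6])
```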
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- A Graph Data Augmentation Strategy with Entropy Preserving [11.886325179121226]
We introduce a novel graph entropy definition as a quantitative index for the feature information of a graph.
Under the constraint of preserving graph entropy, we propose an effective strategy to generate training data via a perturbation mechanism.
Our proposed approach significantly enhances the robustness and generalization ability of GCNs during the training process.
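The paper defines its own graph entropy; purely for illustration, the sketch below uses the Shannon entropy of the degree distribution and accepts only edge flips that leave it nearly unchanged.

```python
import torch

torch.manual_seed(0)

def graph_entropy(A: torch.Tensor) -> torch.Tensor:
    # Stand-in entropy: Shannon entropy of the degree distribution
    # (the paper's definition differs; this is only illustrative).
    deg = A.sum(1)
    p = deg / deg.sum()
    return -(p * (p + 1e-12).log()).sum()

A = (torch.rand(20, 20) > 0.7).float()
A = ((A + A.T) > 0).float()                          # symmetrize
base = graph_entropy(A)

# Perturb edges, but only keep augmentations whose entropy stays close.
for _ in range(10):
    i, j = torch.randint(0, 20, (2,))
    if i == j:
        continue
    A_aug = A.clone()
    A_aug[i, j] = A_aug[j, i] = 1 - A_aug[i, j]      # flip one edge
    if (graph_entropy(A_aug) - base).abs() < 0.01:
        A = A_aug                                    # accept entropy-preserving flip
print(base.item(), graph_entropy(A).item())
```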
arXiv Detail & Related papers (2021-07-13T12:58:32Z)
- Self-Supervised Graph Learning with Proximity-based Views and Channel Contrast [4.761137180081091]
Graph neural networks (GNNs) use neighborhood aggregation as a core component that results in feature smoothing among nodes in proximity.
To tackle this problem, we strengthen the graph with two additional graph views, in which nodes are directly linked to those with the most similar features or local structures.
We propose a method that aims to maximize the agreement between representations across generated views and the original graph.
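Constructing the feature-similarity view is straightforward; the kNN construction and cosine-agreement objective below are simplified stand-ins for the paper's views and contrastive loss.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(30, 16)                         # node features
k = 3

# Extra view: link each node to its k most feature-similar nodes.
Xn = F.normalize(X, dim=1)
sim = Xn @ Xn.T
sim.fill_diagonal_(-1.0)                        # exclude self-links
knn = sim.topk(k, dim=1).indices
A_feat = torch.zeros(30, 30)
A_feat.scatter_(1, knn, 1.0)

# Agreement objective: embeddings from the feature view should match
# embeddings from the original view (random stand-ins here).
Z_orig = F.normalize(torch.randn(30, 8), dim=1)
Z_feat = F.normalize(A_feat @ Z_orig / k, dim=1)  # one aggregation on the kNN view
agreement = (Z_orig * Z_feat).sum(1).mean()       # cosine agreement to maximize
print(agreement.item())
```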
arXiv Detail & Related papers (2021-06-07T15:38:36Z)
- Uniting Heterogeneity, Inductiveness, and Efficiency for Graph Representation Learning [68.97378785686723]
Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
The majority of GNNs are designed only for homogeneous graphs, leading to inferior adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta-path-free message passing scheme that packs heterogeneous node features together with their associated edges from both low- and high-order neighbor nodes.
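A dense toy version of packing node features with edge features over low- and high-order neighborhoods; the dense edge tensor and 2-hop reachability are illustrative simplifications.

```python
import torch

torch.manual_seed(0)
N, d = 8, 16
X = torch.randn(N, d)                           # node features (already projected)
E = torch.randn(N, N, d)                        # edge features between node pairs
A = (torch.rand(N, N) > 0.6).float()            # 1-hop adjacency

# Meta-path-free message: pack a neighbor's node features with the
# connecting edge's features, then aggregate per neighborhood order.
A2 = ((A @ A) > 0).float()                      # high-order (2-hop) reachability

def aggregate(adj):
    msg = (X.unsqueeze(0) + E) * adj.unsqueeze(-1)   # node + edge, masked by adj
    return msg.sum(1) / adj.sum(1, keepdim=True).clamp(min=1)

H = aggregate(A) + aggregate(A2)                # combine low- and high-order messages
print(H.shape)                                  # torch.Size([8, 16])
```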
arXiv Detail & Related papers (2021-04-04T23:31:39Z)