Graph Condensation for Graph Neural Networks
- URL: http://arxiv.org/abs/2110.07580v1
- Date: Thu, 14 Oct 2021 17:42:14 GMT
- Title: Graph Condensation for Graph Neural Networks
- Authors: Wei Jin, Lingxiao Zhao, Shichang Zhang, Yozen Liu, Jiliang Tang, Neil Shah
- Abstract summary: We study the problem of graph condensation for graph neural networks (GNNs).
We aim to condense the large, original graph into a small, synthetic and highly-informative graph.
We are able to approximate the original test accuracy by 95.3% on Reddit, 99.8% on Flickr and 99.0% on Citeseer.
- Score: 34.4899280207043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the prevalence of large-scale graphs in real-world applications, the
storage and time for training neural models have raised increasing concerns. To
alleviate the concerns, we propose and study the problem of graph condensation
for graph neural networks (GNNs). Specifically, we aim to condense the large,
original graph into a small, synthetic and highly-informative graph, such that
GNNs trained on the small graph and large graph have comparable performance. We
approach the condensation problem by imitating the GNN training trajectory on
the original graph through the optimization of a gradient matching loss and
design a strategy to condense node features and structural information
simultaneously. Extensive experiments have demonstrated the effectiveness of
the proposed framework in condensing different graph datasets into informative
smaller graphs. In particular, we are able to approximate the original test
accuracy by 95.3% on Reddit, 99.8% on Flickr and 99.0% on Citeseer, while
reducing their graph size by more than 99.9%, and the condensed graphs can be
used to train various GNN architectures.
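To make the gradient-matching idea above concrete, here is a minimal sketch, assuming a toy setup: synthetic node features and a free adjacency matrix are learned so that the gradients a small GCN receives on the condensed graph mimic those it receives on the original graph. The two-layer dense GCN, the random stand-in data, the cosine-similarity matching loss, and all hyperparameters below are illustrative assumptions, not the authors' released implementation (which, among other things, generates the synthetic adjacency from the synthetic features and matches gradients along a training trajectory).

```python
# Minimal sketch of gradient-matching graph condensation (illustrative assumptions only).
import torch
import torch.nn.functional as F

def normalize_adj(adj):
    """Symmetrically normalize a dense adjacency matrix with self-loops."""
    adj = adj + torch.eye(adj.size(0), device=adj.device)
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

class GCN(torch.nn.Module):
    """Two-layer dense GCN used only to produce gradients for matching."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = torch.nn.Linear(hid_dim, n_classes, bias=False)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)

def gradient_matching_loss(grads_real, grads_syn):
    """Sum of (1 - cosine similarity) between corresponding parameter gradients."""
    loss = 0.0
    for gr, gs in zip(grads_real, grads_syn):
        loss = loss + (1 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0))
    return loss

# --- toy "original" graph: a random stand-in for a real dataset ---
n, d, c, n_syn = 200, 32, 5, 20
x_real = torch.randn(n, d)
adj_real = (torch.rand(n, n) < 0.05).float()
adj_real = ((adj_real + adj_real.t()) > 0).float()
y_real = torch.randint(0, c, (n,))

# --- learnable condensed graph: node features plus free adjacency logits ---
x_syn = torch.randn(n_syn, d, requires_grad=True)
adj_logits = torch.zeros(n_syn, n_syn, requires_grad=True)
y_syn = torch.randint(0, c, (n_syn,))          # kept fixed during condensation
opt = torch.optim.Adam([x_syn, adj_logits], lr=0.01)

adj_real_norm = normalize_adj(adj_real)
for step in range(200):
    model = GCN(d, 64, c)                      # fresh initialization each step
    params = list(model.parameters())

    # Gradients of the training loss on the original graph.
    loss_real = F.cross_entropy(model(x_real, adj_real_norm), y_real)
    grads_real = torch.autograd.grad(loss_real, params)

    # Gradients on the condensed graph, kept differentiable w.r.t. x_syn / adj_logits.
    adj_syn = torch.sigmoid(adj_logits)
    adj_syn = (adj_syn + adj_syn.t()) / 2       # keep the synthetic graph undirected
    loss_syn = F.cross_entropy(model(x_syn, normalize_adj(adj_syn)), y_syn)
    grads_syn = torch.autograd.grad(loss_syn, params, create_graph=True)

    opt.zero_grad()
    gradient_matching_loss(grads_real, grads_syn).backward()
    opt.step()
```

A fuller implementation would fix the synthetic labels with the original class proportions and repeat the matching over many model initializations, so that the condensed graph is not tied to one particular set of weights.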
Related papers
- TinyGraph: Joint Feature and Node Condensation for Graph Neural Networks [14.8325651280105]
Training graph neural networks (GNNs) on large-scale graphs can be challenging due to the high computational expense.
Existing graph condensation studies tackle this problem only by reducing the number of nodes in the graph.
We propose a novel framework, TinyGraph, to condense features and nodes simultaneously in graphs.
arXiv Detail & Related papers (2024-07-10T21:54:12Z)
- A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TA$\mathbb{CO}$, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
arXiv Detail & Related papers (2024-01-05T22:22:13Z)
- Fast Graph Condensation with Structure-based Neural Tangent Kernel [30.098666399404287]
We propose a novel dataset condensation framework (GC-SNTK) for graph-structured data.
A Structure-based Neural Tangent Kernel (SNTK) is developed to capture the topology of the graph and serve as the kernel function in the kernel ridge regression (KRR) paradigm; a minimal KRR sketch appears after this list.
Experiments demonstrate the effectiveness of our proposed model in accelerating graph condensation while maintaining high prediction performance.
arXiv Detail & Related papers (2023-10-17T07:25:59Z)
- Graph Condensation for Inductive Node Representation Learning [59.76374128436873]
We propose mapping-aware graph condensation (MCond).
MCond integrates new nodes into the synthetic graph for inductive representation learning.
On the Reddit dataset, MCond achieves up to 121.5x inference speedup and 55.9x reduction in storage requirements.
arXiv Detail & Related papers (2023-07-29T12:11:14Z)
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
arXiv Detail & Related papers (2023-06-05T07:53:52Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Scaling R-GCN Training with Graph Summarization [71.06855946732296]
Training of Relational Graph Convolutional Networks (R-GCN) does not scale well with the size of the graph.
In this work, we experiment with the use of graph summarization techniques to compress the graph.
We obtain reasonable results on the AIFB, MUTAG and AM datasets.
arXiv Detail & Related papers (2022-03-05T00:28:43Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs obtained by Bernoulli sampling from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- Graph Coarsening with Neural Networks [8.407217618651536]
We propose a framework for measuring the quality of a coarsening algorithm and show that, depending on the goal, we need to carefully choose the Laplace operator on the coarse graph.
Motivated by the observation that the current choice of edge weight for the coarse graph may be sub-optimal, we parametrize the weight assignment map with graph neural networks and train it to improve the coarsening quality in an unsupervised way.
arXiv Detail & Related papers (2021-02-02T06:50:07Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
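The KRR-based condensation mentioned above (Fast Graph Condensation with Structure-based Neural Tangent Kernel) can be pictured with a few lines of kernel ridge regression. The propagation-based inner-product kernel below is a deliberately simple stand-in rather than the paper's SNTK, and every name, shape, and hyperparameter is an assumption made only for illustration.

```python
# Illustrative kernel ridge regression (KRR) over graph-propagated features.
# The simple propagated inner-product kernel is an assumed stand-in, not the
# Structure-based Neural Tangent Kernel (SNTK) from the cited paper.
import numpy as np

def normalize_adj(adj):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

def propagated_features(x, adj, hops=2):
    """Smooth node features with a few rounds of normalized-adjacency propagation."""
    a_norm = normalize_adj(adj)
    for _ in range(hops):
        x = a_norm @ x
    return x

def krr_fit_predict(k_train, y_train, k_test_train, reg=1e-3):
    """Closed-form KRR: alpha = (K + reg*I)^{-1} Y, predictions = K_test,train @ alpha."""
    n = k_train.shape[0]
    alpha = np.linalg.solve(k_train + reg * np.eye(n), y_train)
    return k_test_train @ alpha

# Toy usage: a small condensed "training" graph and a larger "test" graph.
rng = np.random.default_rng(0)
x_syn, adj_syn = rng.normal(size=(20, 16)), (rng.random((20, 20)) < 0.2).astype(float)
x_test, adj_test = rng.normal(size=(100, 16)), (rng.random((100, 100)) < 0.05).astype(float)
y_syn = np.eye(5)[rng.integers(0, 5, 20)]          # one-hot labels of the condensed nodes

h_syn = propagated_features(x_syn, adj_syn)
h_test = propagated_features(x_test, adj_test)
k_train = h_syn @ h_syn.T
k_test_train = h_test @ h_syn.T
pred = krr_fit_predict(k_train, y_train=y_syn, k_test_train=k_test_train).argmax(axis=1)
```

The appeal of the kernel view is that fitting on the small condensed graph reduces to solving one small linear system, which avoids repeatedly training a GNN inside the condensation loop.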
This list is automatically generated from the titles and abstracts of the papers on this site.