Fast Graph Condensation with Structure-based Neural Tangent Kernel
- URL: http://arxiv.org/abs/2310.11046v2
- Date: Fri, 1 Mar 2024 06:41:43 GMT
- Title: Fast Graph Condensation with Structure-based Neural Tangent Kernel
- Authors: Lin Wang, Wenqi Fan, Jiatong Li, Yao Ma, Qing Li
- Abstract summary: We propose a novel dataset condensation framework (GC-SNTK) for graph-structured data.
A Structure-based Neural Tangent Kernel (SNTK) is developed to capture graph topology and serves as the kernel function in the KRR paradigm.
Experiments demonstrate the effectiveness of our proposed model in accelerating graph condensation while maintaining high prediction performance.
- Score: 30.098666399404287
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid development of Internet technology has given rise to a vast amount
of graph-structured data. Graph Neural Networks (GNNs), an effective method for
various graph mining tasks, incur substantial computational costs when dealing
with large-scale graph data. A data-centric solution is proposed: condensing the
large graph dataset into a smaller one without sacrificing the predictive
performance of GNNs. However, existing efforts, which condense graph-structured
data through a computationally intensive bi-level optimization architecture, also
suffer from massive computation costs. In this paper, we propose reformulating
the graph condensation problem as a Kernel Ridge Regression (KRR) task instead of
iteratively training GNNs in the inner loop of bi-level optimization. More
specifically, we propose a novel dataset condensation framework (GC-SNTK) for
graph-structured data, where a Structure-based Neural Tangent Kernel (SNTK) is
developed to capture graph topology and serves as the kernel function in the KRR
paradigm. Comprehensive experiments demonstrate the effectiveness of the proposed
model in accelerating graph condensation while maintaining high prediction
performance. The source code is available at
https://github.com/WANGLin0126/GCSNTK.
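To make the KRR reformulation concrete, below is a minimal, illustrative PyTorch sketch of single-level condensation: a small synthetic node set is optimized so that KRR fitted on it predicts the full training labels. All names are hypothetical; `propagate` and `rbf_kernel` are crude stand-ins for the paper's SNTK (which encodes topology inside the kernel itself), and `y_t` is assumed to be a one-hot label matrix.

```python
import torch

def propagate(X, A_hat, k=2):
    # Crude structure-aware step: smooth node features with a normalized
    # adjacency matrix A_hat for k rounds; the actual SNTK instead captures
    # topology inside the kernel computation.
    for _ in range(k):
        X = A_hat @ X
    return X

def rbf_kernel(A, B, gamma=1.0):
    # Placeholder kernel (squared distances computed directly so gradients
    # stay well-defined on the diagonal); the real method would use the SNTK.
    d2 = (A * A).sum(1, keepdim=True) - 2 * A @ B.T + (B * B).sum(1)
    return torch.exp(-gamma * d2)

def condense_krr(X_t, y_t, A_hat, n_cond=64, ridge=1e-3, lr=0.01, steps=300):
    """Learn a small synthetic set (X_s, y_s) such that KRR fitted on it
    predicts the labels of the full training set -- no inner GNN training."""
    X_t = propagate(X_t, A_hat)                   # bake topology into features
    idx = torch.randperm(len(X_t))[:n_cond]
    X_s = X_t[idx].clone().requires_grad_(True)   # initialize from real nodes
    y_s = y_t[idx].clone().requires_grad_(True)
    opt = torch.optim.Adam([X_s, y_s], lr=lr)
    ridge_eye = ridge * torch.eye(n_cond)
    for _ in range(steps):
        K_ss = rbf_kernel(X_s, X_s)
        K_ts = rbf_kernel(X_t, X_s)
        # Closed-form KRR: predict every real node from the synthetic set.
        pred = K_ts @ torch.linalg.solve(K_ss + ridge_eye, y_s)
        loss = ((pred - y_t) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return X_s.detach(), y_s.detach()
```

Because the inner problem is solved in closed form, each condensation step costs one n_cond x n_cond linear solve rather than a full GNN training run, which is where the acceleration over bi-level methods comes from.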
Related papers
- Greener GRASS: Enhancing GNNs with Encoding, Rewiring, and Attention [12.409982249220812]
We introduce Graph Attention with Structures (GRASS), a novel GNN architecture designed to enhance relative attention on graphs.
GRASS rewires the input graph by superimposing a random regular graph to achieve long-range information propagation (a minimal rewiring sketch follows this entry).
It also employs a novel additive attention mechanism tailored for graph-structured data.
arXiv Detail & Related papers (2024-07-08T06:21:56Z)
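The rewiring idea in the GRASS entry above is easy to prototype. A minimal NetworkX sketch, under the assumption that rewiring amounts to taking the union with a random regular graph (GRASS's edge typing and attention mechanism are omitted):

```python
import networkx as nx

def superimpose_regular(G, degree=4, seed=0):
    # Overlay a random d-regular graph on G's node set and take the union
    # of the edge sets, creating long-range shortcuts between distant nodes.
    n = G.number_of_nodes()                 # note: degree * n must be even
    R = nx.random_regular_graph(degree, n, seed=seed)
    R = nx.relabel_nodes(R, dict(enumerate(G.nodes())))
    H = nx.Graph(G)                         # copy of the input graph
    H.add_edges_from(R.edges())             # union of the two edge sets
    return H
```

For example, superimpose_regular(nx.karate_club_graph()) shortens typical path lengths while adding at most degree * n / 2 edges.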
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs (a simplified selection sketch follows this entry).
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
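As a rough illustration of selecting a coreset on spectral embeddings, here is a dense NumPy sketch using farthest-point greedy selection. This is a hypothetical simplification: SGGC's actual greedy objective over ego-graphs and its scalable sparse implementation differ.

```python
import numpy as np

def spectral_coreset(A, k_eig=16, m=100):
    # A: dense symmetric adjacency matrix. Embed nodes with the low
    # eigenvectors of the normalized Laplacian, then greedily pick m nodes
    # that spread out over that embedding (farthest-point traversal).
    deg = A.sum(1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt
    _, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    U = vecs[:, :k_eig]                     # smooth spectral embedding
    chosen = [int(np.argmax(deg))]          # seed with the highest-degree node
    dist = np.linalg.norm(U - U[chosen[0]], axis=1)
    while len(chosen) < m:
        nxt = int(dist.argmax())
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(U - U[nxt], axis=1))
    return chosen
```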
- Graph Data Condensation via Self-expressive Graph Structure Reconstruction [7.4525875528900665]
We introduce a novel framework named Graph Data Condensation via Self-expressive Graph Structure Reconstruction (GCSR).
Our method explicitly incorporates the original graph structure into the condensing process and captures the nuanced interdependencies between the condensed nodes.
arXiv Detail & Related papers (2024-03-12T03:54:25Z)
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
arXiv Detail & Related papers (2023-06-05T07:53:52Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN makes the first attempt to learn to adapt to the optimal graph kernel in a unified manner.
Experiments on established benchmark datasets show promising results, demonstrating the outstanding performance of the proposed AKGNN.
arXiv Detail & Related papers (2021-12-08T20:23:58Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data that works universally in node classification, link prediction, and graph classification tasks (a minimal augmentation sketch follows this entry).
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
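FLAG's core loop is a few steps of gradient ascent on a node-feature perturbation followed by a training step on the perturbed features. A hedged PyTorch sketch (the paper's "free" multi-step loss accumulation is simplified away; model(X, A_hat) is a hypothetical GNN forward signature):

```python
import torch

def flag_step(model, X, y, A_hat, loss_fn, ascent_steps=3, step_size=1e-3):
    # Gradient-based adversarial feature augmentation: perturb node features
    # to increase the loss, then backpropagate the final loss into the model.
    delta = torch.zeros_like(X).uniform_(-step_size, step_size).requires_grad_(True)
    for _ in range(ascent_steps):
        loss = loss_fn(model(X + delta, A_hat), y)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = (delta + step_size * grad.sign()).detach().requires_grad_(True)
    final_loss = loss_fn(model(X + delta, A_hat), y)
    final_loss.backward()                   # gradients w.r.t. model parameters
    return final_loss.item()
```

A caller would zero the optimizer, invoke flag_step, then call optimizer.step(), exactly as in ordinary adversarial training.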
- Adaptive Graph Auto-Encoder for General Data Clustering [90.8576971748142]
Graph-based clustering plays an important role in the clustering area.
Recent studies on graph convolutional neural networks have achieved impressive success on graph-structured data.
We propose a graph auto-encoder for general data clustering, which constructs the graph adaptively according to the generative perspective of graphs.
arXiv Detail & Related papers (2020-02-20T10:11:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.