Learning Graph Neural Networks using Exact Compression
- URL: http://arxiv.org/abs/2304.14793v1
- Date: Fri, 28 Apr 2023 12:04:28 GMT
- Title: Learning Graph Neural Networks using Exact Compression
- Authors: Jeroen Bollen, Jasper Steegmans, Jan Van den Bussche, Stijn Vansummeren
- Abstract summary: We study exact compression as a way to reduce the memory requirements of learning GNNs on large graphs.
In particular, we adopt a formal approach to compression and propose a methodology that transforms GNN learning problems into provably equivalent compressed GNN learning problems.
- Score: 2.213723689024101
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) are a form of deep learning that enable a wide
range of machine learning applications on graph-structured data. The learning
of GNNs, however, is known to pose challenges for memory-constrained devices
such as GPUs. In this paper, we study exact compression as a way to reduce the
memory requirements of learning GNNs on large graphs. In particular, we adopt a
formal approach to compression and propose a methodology that transforms GNN
learning problems into provably equivalent compressed GNN learning problems. In
a preliminary experimental evaluation, we give insights into the compression
ratios that can be obtained on real-world graphs and apply our methodology to
an existing GNN benchmark.
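The abstract does not spell out the compression mechanism. A natural way to realize exact compression for message-passing GNNs is color refinement (1-WL): nodes that a GNN with a given number of layers provably cannot distinguish are merged into one super-node, with edge multiplicities preserving message sums. The sketch below illustrates that idea; the weighted compressed adjacency and the helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def color_refinement(adj, features, rounds):
    """1-dimensional Weisfeiler-Leman (color refinement): two nodes get
    the same color iff a `rounds`-layer GNN cannot distinguish them."""
    n = len(adj)
    colors = [hash(tuple(f)) for f in features]   # initial colors from features
    for _ in range(rounds):
        colors = [
            hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
            for v in range(n)
        ]
    return colors

def compress(adj, features, rounds):
    """Merge each color class into a super-node; edge weights count
    neighbors per target class, so aggregated messages are preserved."""
    colors = color_refinement(adj, features, rounds)
    classes = {c: i for i, c in enumerate(dict.fromkeys(colors))}
    rep = {}                                      # one representative per class
    for v, c in enumerate(colors):
        rep.setdefault(classes[c], v)
    weights = np.zeros((len(classes), len(classes)))
    for i, v in rep.items():
        for u in adj[v]:
            weights[i, classes[colors[u]]] += 1
    feats = np.array([features[v] for _, v in sorted(rep.items())])
    return weights, feats

# Toy example: a 4-cycle with identical features compresses to 1 super-node.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
features = [(1.0,)] * 4
W, F = compress(adj, features, rounds=2)
print(W, F)   # W = [[2.]], F = [[1.]]
```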
Related papers
- On the Computational Capability of Graph Neural Networks: A Circuit Complexity Bound Perspective [28.497567290882355]
Graph Neural Networks (GNNs) have become the standard approach for learning and reasoning over relational data.
This paper explores the computational limitations of GNNs through the lens of circuit complexity.
Specifically, we analyze the circuit complexity of common GNN architectures and prove that under constraints of constant-depth layers, linear or sublinear embedding sizes, and precision, GNNs cannot solve key problems such as graph connectivity and graph isomorphism.
arXiv Detail & Related papers (2025-01-11T05:54:10Z)
- Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity [30.2972965458946]
Graph Neural Networks (GNNs) are widely applied to graph learning problems such as node classification.
When scaling the underlying graphs of GNNs to larger sizes, we are forced to train on the complete graph and keep the full graph adjacency and node embeddings in memory.
This paper proposes a sketch-based algorithm whose training time and memory grow sublinearly with respect to graph size.
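The summary names sketching but not the concrete sketch. As a generic illustration of why memory can become sublinear in graph size, the snippet below count-sketches a node-embedding matrix, a standard building block and not necessarily the one Sketch-GNN uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_sketch(X, sketch_dim, rng):
    """Project an (n x d) node-embedding matrix down to (sketch_dim x d):
    each row is hashed to one bucket and scatter-added with a random sign."""
    n = X.shape[0]
    buckets = rng.integers(0, sketch_dim, size=n)   # hash h(i)
    signs = rng.choice([-1.0, 1.0], size=n)         # sign s(i)
    S = np.zeros((sketch_dim, X.shape[1]))
    np.add.at(S, buckets, signs[:, None] * X)       # unbuffered scatter-add
    return S

# The sketch costs O(sketch_dim * d) memory, independent of the node count n.
X = rng.normal(size=(100_000, 16))                  # full node embeddings
S = count_sketch(X, sketch_dim=1024, rng=rng)
print(S.shape)                                      # (1024, 16)
```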
arXiv Detail & Related papers (2024-06-21T18:22:11Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and limit the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
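Direct feedback alignment (DFA) replaces backpropagation's transposed weight matrices with fixed random feedback that carries the output error straight to each hidden layer. Below is a minimal numpy sketch of plain DFA on a toy two-layer network, not the paper's GNN-specific formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out, lr = 8, 32, 2, 0.1
W1 = rng.normal(0, 0.3, (n_in, n_hid))
W2 = rng.normal(0, 0.3, (n_hid, n_out))
B = rng.normal(0, 0.3, (n_out, n_hid))     # fixed random feedback matrix

X = rng.normal(size=(256, n_in))
y = (X[:, 0] > 0).astype(int)              # toy binary labels
Y = np.eye(n_out)[y]

for step in range(200):
    h = np.tanh(X @ W1)                    # forward pass
    p = h @ W2
    e = p - Y                              # output error (squared loss)
    dh = (e @ B) * (1 - h**2)              # DFA: random projection, not W2.T
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ dh / len(X)

print("train accuracy:", ((np.tanh(X @ W1) @ W2).argmax(1) == y).mean())
```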
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- How Graph Neural Networks Learn: Lessons from Training Dynamics [80.41778059014393]
We study the training dynamics of graph neural networks (GNNs) in function space.
We find that the gradient descent optimization of GNNs implicitly leverages the graph structure to update the learned function.
This finding offers new interpretable insights into when and why the learned GNN functions generalize.
arXiv Detail & Related papers (2023-10-08T10:19:56Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed wide spread applications in graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inference on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, improving accuracy, robustness, and resistance to adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
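A rough sketch of the growing-graph idea: Bernoulli-sample graphs of increasing size from a graphon and warm-start training at each stage. The example graphon w and the commented-out train_gnn placeholder are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graph(w, n, rng):
    """Sample an n-node graph from graphon w: draw latent positions
    u_i ~ Uniform[0,1], then include edge (i, j) independently with
    probability w(u_i, u_j) (Bernoulli sampling)."""
    u = rng.uniform(size=n)
    P = w(u[:, None], u[None, :])
    A = np.triu((rng.uniform(size=(n, n)) < P).astype(float), 1)
    return A + A.T                               # undirected, no self-loops

w = lambda x, y: 0.8 * np.exp(-3 * np.abs(x - y))   # example graphon

# Train on graphs of growing size, warm-starting the GNN weights from
# the previous stage (training loop omitted; train_gnn is hypothetical).
for n in [64, 128, 256, 512]:
    A = sample_graph(w, n, rng)
    # weights = train_gnn(A, warm_start=weights)
    print(n, "nodes,", int(A.sum() / 2), "edges")
```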
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- Graph-Free Knowledge Distillation for Graph Neural Networks [30.38128029453977]
We propose the first dedicated approach to distilling knowledge from a graph neural network without graph data.
The proposed graph-free KD (GFKD) learns graph topology structures for knowledge transfer by modeling them with a multinomial distribution.
We provide strategies for handling different types of prior knowledge in the graph data or the GNNs.
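A minimal sketch of the multinomial topology parameterization mentioned above: learnable edge logits define a multinomial distribution from which pseudo-graph edges are drawn. The optimization of these logits against a teacher GNN, which is the heart of GFKD, is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 6                                        # nodes in the generated graph
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
logits = rng.normal(size=len(pairs))         # learnable edge scores
probs = np.exp(logits) / np.exp(logits).sum()

# Draw m edge samples from the multinomial to materialize a pseudo graph.
m = 8
counts = rng.multinomial(m, probs)
A = np.zeros((n, n))
for (i, j), c in zip(pairs, counts):
    if c > 0:
        A[i, j] = A[j, i] = 1.0
print(A)
```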
arXiv Detail & Related papers (2021-05-16T21:38:24Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Graph Neural Networks for Motion Planning [108.51253840181677]
We present two techniques, GNNs over dense fixed graphs for low-dimensional problems and sampling-based GNNs for high-dimensional problems.
We examine the ability of a GNN to tackle planning problems such as identifying critical nodes or learning the sampling distribution in Rapidly-exploring Random Trees (RRT).
Experiments with critical sampling, a pendulum, and a six-DoF robot arm show that GNNs improve on traditional analytic methods as well as learning approaches using fully connected or convolutional neural networks.
arXiv Detail & Related papers (2020-06-11T08:19:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.