Wide and Deep Graph Neural Network with Distributed Online Learning
- URL: http://arxiv.org/abs/2107.09203v1
- Date: Mon, 19 Jul 2021 23:56:48 GMT
- Title: Wide and Deep Graph Neural Network with Distributed Online Learning
- Authors: Zhan Gao, Fernando Gama, Alejandro Ribeiro
- Abstract summary: Graph neural networks (GNNs) are naturally distributed architectures for learning representations from network data.
In decentralized tasks, the underlying graph often changes between training and testing; online learning can be leveraged to retrain GNNs at testing time to overcome this issue.
This paper develops the Wide and Deep GNN (WD-GNN), a novel architecture that can be updated with distributed online learning mechanisms.
- Score: 174.8221510182559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) are naturally distributed architectures for
learning representations from network data. This renders them suitable
candidates for decentralized tasks. In these scenarios, the underlying graph
often changes with time due to link failures or topology variations, creating a
mismatch between the graphs on which GNNs were trained and the ones on which
they are tested. Online learning can be leveraged to retrain GNNs at testing
time to overcome this issue. However, most online algorithms are centralized
and usually offer guarantees only on convex problems, which GNNs rarely lead
to. This paper develops the Wide and Deep GNN (WD-GNN), a novel architecture
that can be updated with distributed online learning mechanisms. The WD-GNN
consists of two components: the wide part is a linear graph filter and the deep
part is a nonlinear GNN. At training time, the joint wide and deep architecture
learns nonlinear representations from data. At testing time, the wide, linear
part is retrained, while the deep, nonlinear one remains fixed. This often
leads to a convex formulation. We further propose a distributed online learning
algorithm that can be implemented in a decentralized setting. We also show the
stability of the WD-GNN to changes of the underlying graph and analyze the
convergence of the proposed online learning procedure. Experiments on movie
recommendation, source localization and robot swarm control corroborate
theoretical findings and show the potential of the WD-GNN for distributed
online learning.
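To make the architecture concrete, here is a minimal NumPy sketch of the WD-GNN idea, not the authors' implementation: it assumes scalar filter taps, a single input feature per node, a squared loss for the test-time phase, and illustrative values for the learning rate and step count.

```python
import numpy as np

def graph_filter(S, x, h):
    """Wide part: linear graph filter  sum_k h[k] * S^k x  with scalar taps h."""
    out = np.zeros_like(x)
    z = x.copy()
    for hk in h:
        out = out + hk * z   # accumulate the k-hop term
        z = S @ z            # shift the signal one more hop over the graph
    return out

def gnn_forward(S, x, layers):
    """Deep part: a simple GNN where each layer is a graph filter + ReLU."""
    z = x
    for taps in layers:
        z = np.maximum(graph_filter(S, z, taps), 0.0)
    return z

def wd_gnn(S, x, h_wide, layers):
    """WD-GNN output: wide (linear filter) part plus deep (nonlinear GNN) part."""
    return graph_filter(S, x, h_wide) + gnn_forward(S, x, layers)

def retrain_wide(S, x, y, h_wide, layers, lr=1e-3, steps=100):
    """Test-time update: freeze the deep part, take gradient steps on the wide
    taps. With a squared loss the problem is convex in h_wide, because the
    output is linear in the filter taps."""
    deep_out = gnn_forward(S, x, layers)           # fixed during retraining
    Z, z = [], x.copy()
    for _ in range(len(h_wide)):                   # precompute S^k x
        Z.append(z.copy())
        z = S @ z
    for _ in range(steps):
        resid = deep_out + sum(hk * zk for hk, zk in zip(h_wide, Z)) - y
        grad = np.array([zk @ resid for zk in Z])  # d(0.5*||resid||^2)/dh[k]
        h_wide = h_wide - lr * grad
    return h_wide

# Toy usage on a random graph: retrain only the wide part at "test time".
rng = np.random.default_rng(0)
n, K = 20, 3
A = (rng.uniform(size=(n, n)) < 0.2).astype(float)
S = np.triu(A, 1); S = S + S.T                     # symmetric shift operator
S = S / np.linalg.norm(S, 2)                       # normalize spectral norm
x, y = rng.standard_normal(n), rng.standard_normal(n)
h_wide = retrain_wide(S, x, y, np.zeros(K),
                      layers=[rng.standard_normal(K) * 0.1 for _ in range(2)])
```

Note that each application of S is a single exchange with one-hop neighbors, so the shifted signals S^k x are computed locally; this is what makes graph filters naturally distributable, though the inner products in the gradient would still need an aggregation or consensus step in a fully decentralized deployment.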
Related papers
- The Evolution of Distributed Systems for Graph Neural Networks and their Origin in Graph Processing and Deep Learning: A Survey [17.746899445454048]
Graph Neural Networks (GNNs) are an emerging research field.
GNNs can be applied to various domains including recommendation systems, computer vision, natural language processing, biology and chemistry.
The survey fills a gap in the literature by summarizing and categorizing important methods and techniques for large-scale GNN solutions.
arXiv Detail & Related papers (2023-05-23T09:22:33Z)
- Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a type of deep learning models that are trained on graphs and have been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for GNNs to efficiently scale to large graphs.
As a remedy, distributed computing has become a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs drawn from the graphon via Bernoulli sampling (a minimal sketch of this sampling step appears after this list).
Inspired by these results, we propose an algorithm that learns GNNs on large-scale graphs by starting from a moderate number of nodes and successively increasing the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks [13.965982814292971]
Graph Neural Networks (GNNs) are the first choice methods for graph machine learning problems.
Centralizing a massive amount of real-world graph data for GNN training is prohibitive due to user-side privacy concerns.
This work proposes SpreadGNN, a novel multi-task federated training framework.
arXiv Detail & Related papers (2021-06-04T22:20:47Z)
- Graph-Free Knowledge Distillation for Graph Neural Networks [30.38128029453977]
We propose the first dedicated approach to distilling knowledge from a graph neural network without graph data.
The proposed graph-free KD (GFKD) learns graph topology structures for knowledge transfer by modeling them with a multinomial distribution.
We provide strategies for handling different types of prior knowledge in the graph data or the GNNs.
arXiv Detail & Related papers (2021-05-16T21:38:24Z)
- FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks [68.64678614325193]
Graph Neural Network (GNN) research is rapidly growing thanks to the capacity of GNNs to learn representations from graph-structured data.
Centralizing a massive amount of real-world graph data for GNN training is prohibitive due to user-side privacy concerns.
We introduce FedGraphNN, an open research federated learning system and a benchmark to facilitate GNN-based FL research.
arXiv Detail & Related papers (2021-04-14T22:11:35Z)
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
- Wide and Deep Graph Neural Networks with Distributed Online Learning [175.96910854433574]
Graph neural networks (GNNs) learn representations from network data with naturally distributed architectures.
When the graph changes between training and testing, online learning can be used to retrain GNNs at testing time to overcome the mismatch.
This paper proposes the Wide and Deep GNN (WD-GNN), a novel architecture that can be easily updated with distributed online learning mechanisms.
arXiv Detail & Related papers (2020-06-11T12:48:03Z)
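Relatedly, the "Increase and Conquer" entry above trains on graphs drawn from a graphon. The sketch below illustrates the generic Bernoulli-sampling step it refers to; the particular graphon W, the size schedule, and the function name are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def sample_graph_from_graphon(W, n, rng=None):
    """Draw an n-node undirected graph from a graphon W: [0,1]^2 -> [0,1].
    Each edge (i, j) is an independent Bernoulli trial with success
    probability W(u_i, u_j), where the u_i are latent node positions."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(0.0, 1.0, size=n)        # latent positions in [0, 1]
    P = W(u[:, None], u[None, :])            # pairwise edge probabilities
    E = rng.uniform(size=(n, n)) < P         # Bernoulli edge trials
    A = np.triu(E, 1).astype(float)          # keep the strict upper triangle,
    return A + A.T                           # then symmetrize (no self-loops)

# "Increase and conquer" style schedule: train on successively larger graphs.
W = lambda x, y: 0.8 - 0.6 * np.abs(x - y)   # an assumed, illustrative graphon
for n in (50, 100, 200):
    A = sample_graph_from_graphon(W, n, rng=n)
```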
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.