p2pGNN: A Decentralized Graph Neural Network for Node Classification in
Peer-to-Peer Networks
- URL: http://arxiv.org/abs/2111.14837v1
- Date: Mon, 29 Nov 2021 12:21:47 GMT
- Title: p2pGNN: A Decentralized Graph Neural Network for Node Classification in
Peer-to-Peer Networks
- Authors: Emmanouil Krasanakis, Symeon Papadopoulos, Ioannis Kompatsiaris
- Abstract summary: We aim to classify nodes of unstructured peer-to-peer networks with communication uncertainty, such as users of decentralized social networks.
We employ decoupled Graph Neural Networks (GNNs) to solve this problem.
We develop an asynchronous decentralized formulation of diffusion that converges to the same predictions at a rate linear in the communication rate.
- Score: 15.164084925877624
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this work, we aim to classify nodes of unstructured peer-to-peer networks
with communication uncertainty, such as users of decentralized social networks.
Graph Neural Networks (GNNs) are known to improve the accuracy of simpler
classifiers in centralized settings by leveraging naturally occurring network
links, but graph convolutional layers are challenging to implement in
decentralized settings when node neighbors are not constantly available. We
address this problem by employing decoupled GNNs, where base classifier
predictions and errors are diffused through graphs after training. For these,
we deploy pre-trained and gossip-trained base classifiers and implement
peer-to-peer graph diffusion under communication uncertainty. In particular, we
develop an asynchronous decentralized formulation of diffusion that converges
to the same predictions at a rate linear in the communication rate. We
experiment on three real-world graphs with node features and labels and
simulate peer-to-peer networks with uniformly random communication frequencies;
given a portion of known labels, our decentralized graph diffusion achieves
comparable accuracy to centralized GNNs.
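To make the diffusion mechanism concrete, here is a minimal sketch of asynchronous peer-to-peer diffusion, assuming a personalized-PageRank-style decoupled GNN and a toy cycle graph; the class and parameter names (Peer, ALPHA) are illustrative, not the authors' code. Each peer caches the last estimate heard from each neighbor and refreshes its own prediction whenever a communication event occurs.

```python
# Sketch of asynchronous decentralized diffusion (illustrative, not the paper's code).
import random
import numpy as np

ALPHA = 0.1  # restart probability of the diffusion

class Peer:
    def __init__(self, node_id, base_prediction):
        self.id = node_id
        self.base = np.asarray(base_prediction, dtype=float)  # h_v from the base classifier
        self.estimate = self.base.copy()                      # current diffused prediction
        self.neighbor_cache = {}                              # last estimate heard per neighbor

    def update(self):
        # Re-aggregate from whatever neighbor estimates are currently cached.
        if self.neighbor_cache:
            avg = np.mean(list(self.neighbor_cache.values()), axis=0)
            self.estimate = ALPHA * self.base + (1 - ALPHA) * avg

    def exchange(self, other):
        # One communication event: both peers swap estimates, then update.
        self.neighbor_cache[other.id] = other.estimate.copy()
        other.neighbor_cache[self.id] = self.estimate.copy()
        self.update()
        other.update()

# Toy simulation: a 4-node cycle with 2-class base predictions and
# uniformly random communication events, as in the paper's simulation setup.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
peers = [Peer(i, p) for i, p in enumerate([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])]
for _ in range(1000):
    u, v = random.choice(edges)
    peers[u].exchange(peers[v])
print([p.estimate.round(2) for p in peers])
```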
Related papers
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method relies on a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model effectively enhances generalization under various types of distribution shifts and yields up to a 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
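A hedged sketch of what the mixture-of-expert GNN predictor coordinated by an environment estimator mentioned above could look like; the one-layer propagation, layer sizes, and softmax gating are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MoEGNN(nn.Module):
    def __init__(self, in_dim, out_dim, num_experts=3):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(in_dim, out_dim) for _ in range(num_experts))
        self.env_estimator = nn.Linear(in_dim, num_experts)  # infers a soft environment per node

    def forward(self, x, adj):
        h = adj @ x                                              # one propagation step over the graph
        gates = torch.softmax(self.env_estimator(h), dim=-1)    # (N, K) environment weights
        outs = torch.stack([expert(h) for expert in self.experts], dim=1)  # (N, K, C)
        return (gates.unsqueeze(-1) * outs).sum(dim=1)           # gate-weighted mixture

x = torch.randn(5, 8)                          # 5 nodes, 8 features
adj = torch.eye(5) + torch.rand(5, 5).round()  # toy (unnormalized) adjacency with self-loops
model = MoEGNN(8, 3)
print(model(x, adj).shape)                     # torch.Size([5, 3])
```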
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
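A minimal sketch of the idea above: build a histogram of each node's neighborhood features and score it against learned anchor histograms with the histogram intersection kernel. The bin count, scalar features, and random anchors are illustrative assumptions:

```python
import numpy as np

def local_histograms(features, neighbors, bins=8):
    # One histogram per node, built from the scalar features of its neighborhood.
    hists = []
    for v, nbrs in enumerate(neighbors):
        vals = features[[v] + nbrs]
        h, _ = np.histogram(vals, bins=bins, range=(0.0, 1.0))
        hists.append(h / max(h.sum(), 1))  # normalize to a distribution
    return np.array(hists)

def histogram_intersection(h, anchors):
    # Kernel value sum(min(., .)) against each learned anchor histogram.
    return np.array([[np.minimum(hv, a).sum() for a in anchors] for hv in h])

features = np.random.rand(4)                  # scalar feature per node
neighbors = [[1], [0, 2], [1, 3], [2]]        # path graph 0-1-2-3
anchors = np.random.rand(2, 8)                # 2 "filter" histograms (would be learned)
anchors /= anchors.sum(axis=1, keepdims=True)
print(histogram_intersection(local_histograms(features, neighbors), anchors))
```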
- Domain-adaptive Message Passing Graph Neural Network [67.35534058138387]
Cross-network node classification (CNNC) aims to classify nodes in a label-deficient target network by transferring knowledge from a source network with abundant labels.
We propose a domain-adaptive message passing graph neural network (DM-GNN), which integrates a graph neural network (GNN) with conditional adversarial domain adaptation.
arXiv Detail & Related papers (2023-08-31T05:26:08Z)
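A minimal sketch of pairing a graph encoder with adversarial domain adaptation via a gradient reversal layer, following the generic DANN recipe; this is an assumption about how the adversarial component could look, not DM-GNN's exact design:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad  # flip gradients so the encoder learns to fool the domain critic

encoder = nn.Linear(8, 16)        # stand-in for a message-passing encoder
classifier = nn.Linear(16, 3)     # node-label head
domain_critic = nn.Linear(16, 2)  # source-vs-target discriminator

x, adj = torch.randn(5, 8), torch.eye(5)
h = torch.relu(encoder(adj @ x))  # one toy propagation step
label_logits = classifier(h)
domain_logits = domain_critic(GradReverse.apply(h))
```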
- Semi-decentralized Inference in Heterogeneous Graph Neural Networks for Traffic Demand Forecasting: An Edge-Computing Approach [35.0857568908058]
Graph neural networks (GNNs) have shown promise for predicting taxi service demand and supply.
We propose a semi-decentralized approach utilizing multiple cloudlets (moderately sized storage and computation devices).
Also, we propose a heterogeneous GNN-LSTM algorithm for improved taxi-level demand and supply forecasting.
arXiv Detail & Related papers (2023-02-28T00:21:18Z)
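A minimal sketch of a GNN-LSTM forecaster like the one named above: spatial aggregation over the region graph at each time step, then an LSTM along the time axis. The single homogeneous edge type and layer sizes are simplifying assumptions:

```python
import torch
import torch.nn as nn

class GNNLSTM(nn.Module):
    def __init__(self, in_dim, hidden):
        super().__init__()
        self.gconv = nn.Linear(in_dim, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # demand per node at the next step

    def forward(self, x, adj):
        # x: (T, N, F) demand features over T steps; adj: (N, N) region graph.
        h = torch.relu(self.gconv(adj @ x))    # spatial aggregation per time step
        out, _ = self.lstm(h.transpose(0, 1))  # (N, T, H): temporal model per node
        return self.head(out[:, -1])           # forecast from the last step

x, adj = torch.randn(12, 6, 4), torch.eye(6)   # 12 steps, 6 regions, 4 features
print(GNNLSTM(4, 16)(x, adj).shape)            # torch.Size([6, 1])
```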
- Scalable Neural Network Training over Distributed Graphs [45.151244961817454]
Real-world graph data must often be stored across many machines due to capacity constraints.
Network communication is costly and becomes the main bottleneck when training GNNs.
The proposed framework is the first that can train GNNs at all network decentralization levels.
arXiv Detail & Related papers (2023-02-25T10:42:34Z)
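A toy sketch of the communication pattern that makes the training above costly: each machine aggregates local neighbors directly but must pull boundary-node embeddings from other machines. The two-way partition and single mean-aggregation step are illustrative:

```python
import numpy as np

adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
feats = np.random.rand(4, 3)
parts = {0: [0, 1], 1: [2, 3]}  # nodes assigned to two "machines"

def step(part_nodes, remote_cache):
    # Aggregate neighbor features; remote neighbors come from the communicated cache.
    out = {}
    for v in part_nodes:
        nbrs = np.nonzero(adj[v])[0]
        msgs = [feats[u] if u in part_nodes else remote_cache[u] for u in nbrs]
        out[v] = np.mean(msgs, axis=0)
    return out

# Communication phase: each machine sends the embeddings of its boundary nodes.
cache = {1: feats[1], 2: feats[2]}  # nodes 1 and 2 sit on the cut edge
h0 = step(parts[0], cache)
h1 = step(parts[1], cache)
```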
- SSSNET: Semi-Supervised Signed Network Clustering [4.895808607591299]
We introduce a novel probabilistic balanced normalized cut loss for training nodes in a GNN framework for semi-supervised signed network clustering, called SSSNET.
The main novelty of our approach is a new take on the role of social balance theory for signed network embeddings.
arXiv Detail & Related papers (2021-10-13T10:36:37Z)
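A hedged sketch of a differentiable signed-cut objective over soft cluster assignments, penalizing positive edges across clusters and negative edges within them; this is a generic stand-in, not SSSNET's exact probabilistic balanced normalized cut loss:

```python
import torch

def signed_cut_loss(P, A_pos, A_neg, eps=1e-9):
    # P: (N, K) soft assignments; A_pos/A_neg: nonnegative (N, N) adjacencies.
    same = P @ P.T                                        # probability two nodes share a cluster
    pos_cut = (A_pos * (1 - same)).sum() / (A_pos.sum() + eps)   # positive edges cut
    neg_within = (A_neg * same).sum() / (A_neg.sum() + eps)      # negative edges kept inside
    return pos_cut + neg_within

P = torch.softmax(torch.randn(5, 2), dim=-1)
A_pos = torch.rand(5, 5).round()
A_neg = torch.rand(5, 5).round()
print(signed_cut_loss(P, A_pos, A_neg))
```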
- Graph Belief Propagation Networks [34.137798598227874]
We introduce a model that combines the advantages of graph neural networks and collective classification.
In our model, potentials on each node only depend on that node's features, and edge potentials are learned via a coupling matrix.
Our approach can be viewed as either an interpretable message-passing graph neural network or a collective classification method with higher capacity and modernized training.
arXiv Detail & Related papers (2021-06-06T05:24:06Z)
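A minimal sketch of the structure stated above: per-node potentials from each node's own features plus a shared coupling matrix on edges, iterated with a belief-propagation-style update. The undamped update and toy graph are assumptions:

```python
import numpy as np

def propagate(node_logits, neighbors, coupling, iters=5):
    beliefs = np.exp(node_logits)
    beliefs /= beliefs.sum(axis=1, keepdims=True)
    for _ in range(iters):
        new = np.exp(node_logits)            # node potential from own features
        for v, nbrs in enumerate(neighbors):
            for u in nbrs:
                new[v] *= coupling @ beliefs[u]   # message from neighbor u via coupling matrix
        beliefs = new / new.sum(axis=1, keepdims=True)
    return beliefs

node_logits = np.random.randn(4, 2)           # from each node's own features
neighbors = [[1], [0, 2], [1, 3], [2]]        # path graph 0-1-2-3
coupling = np.array([[0.8, 0.2], [0.2, 0.8]]) # homophilous learned coupling
print(propagate(node_logits, neighbors, coupling).round(2))
```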
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
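The building block described above fits in a few lines: a bank of polynomial graph convolutional filters y = Σ_k h_k S^k x applied to a graph signal. The shift operator and filter taps below are toy values:

```python
import numpy as np

def graph_filter(S, x, taps):
    # taps: filter coefficients h_0..h_K of a polynomial in the shift operator S.
    y, Skx = np.zeros_like(x), x.copy()
    for h in taps:
        y += h * Skx
        Skx = S @ Skx  # next power of the shift operator applied to x
    return y

S = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path-graph adjacency as shift
x = np.array([1.0, 0.0, 0.0])
bank = [[1.0, 0.5, 0.25], [0.0, 1.0, -1.0]]                   # two filters in the bank
print(np.stack([graph_filter(S, x, taps) for taps in bank]))
```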
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
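A quick numerical check of the permutation-equivariance property discussed above: relabeling the graph and its signal permutes the filter output the same way (using the same toy polynomial filter form as in the previous sketch):

```python
import numpy as np

def graph_filter(S, x, taps):
    y, Skx = np.zeros_like(x), x.copy()
    for h in taps:
        y, Skx = y + h * Skx, S @ Skx
    return y

S = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
x = np.array([1.0, 2.0, 3.0])
taps = [0.5, 0.3, 0.2]

P = np.eye(3)[[2, 0, 1]]                  # a permutation matrix relabeling the nodes
lhs = graph_filter(P @ S @ P.T, P @ x, taps)
rhs = P @ graph_filter(S, x, taps)
print(np.allclose(lhs, rhs))              # True: graph filters commute with relabeling
```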
- Quantized Decentralized Stochastic Learning over Directed Graphs [52.94011236627326]
We consider a decentralized learning problem where data points are distributed among computing nodes communicating over a directed graph.
As the model size grows, decentralized learning faces a major bottleneck: the communication load of each node transmitting messages (model updates) to its neighbors.
We propose a quantized decentralized learning algorithm over directed graphs, based on the push-sum algorithm from decentralized consensus optimization.
arXiv Detail & Related papers (2020-02-23T18:25:39Z)
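A minimal sketch of push-sum averaging over a directed graph with quantized messages; the rounding quantizer and the column-stochastic mixing matrix are generic stand-ins for the paper's scheme:

```python
import numpy as np

def quantize(v, step=0.1):
    return step * np.round(v / step)  # coarse messages to cut communication cost

# Column-stochastic mixing matrix of a directed cycle 0 -> 1 -> 2 -> 0,
# where each node keeps half its mass and pushes half to its out-neighbor.
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])

x = np.array([1.0, 5.0, 9.0])  # local model parameters (scalars here)
w = np.ones(3)                 # push-sum weights correct the directed-graph bias
for _ in range(100):
    x, w = A @ quantize(x), A @ quantize(w)
print(x / w)                   # each ratio approaches the network average 5.0
```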