ASFGNN: Automated Separated-Federated Graph Neural Network
- URL: http://arxiv.org/abs/2011.03248v1
- Date: Fri, 6 Nov 2020 09:21:34 GMT
- Authors: Longfei Zheng, Jun Zhou, Chaochao Chen, Bingzhe Wu, Li Wang, Benyu Zhang
- Abstract summary: We propose an Automated Separated-Federated Graph Neural Network (ASFGNN) learning paradigm.
We conduct experiments on benchmark datasets and the results demonstrate that ASFGNN significantly outperforms the naive federated GNN.
- Score: 17.817867271722093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have achieved remarkable performance by taking
advantage of graph data. The success of GNN models always depends on rich
features and adjacency relationships. In practice, however, such data are
usually isolated by different data owners (clients) and are thus likely to be
Non-Independent and Identically Distributed (Non-IID). Meanwhile, given the
limited network conditions of data owners, hyper-parameter optimization for
collaborative learning approaches is time-consuming in data isolation
scenarios. To address these problems, we propose an Automated
Separated-Federated Graph Neural Network (ASFGNN) learning paradigm. ASFGNN
consists of two main components, i.e., the training of the GNN and the tuning of
hyper-parameters. Specifically, to solve the data Non-IID problem, we first
propose a separated-federated GNN learning model, which decouples GNN training
into two parts: the message passing part, which is done by each client
separately, and the loss computing part, which is learnt by clients federally. To
handle the time-consuming parameter tuning problem, we leverage Bayesian
optimization to automatically tune the hyper-parameters of all the
clients. We conduct experiments on benchmark datasets, and the results
demonstrate that ASFGNN significantly outperforms the naive federated GNN in
terms of both accuracy and parameter-tuning efficiency.
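
The decoupling described above (each client trains the message passing part on its own data, while only the loss computing part is learnt federally) can be illustrated with a short sketch. The PyTorch code below is a minimal illustration under our own assumptions: the module names, the FedAvg-style uniform averaging of the heads, and the toy data are hypothetical and not taken from the ASFGNN source.

```python
# Hedged sketch of the separated-federated split: message passing stays
# local, only the loss-computing head is averaged across clients.
import copy
import torch
import torch.nn as nn

class LocalGNNBody(nn.Module):
    """Per-client message passing; its weights never leave the client."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # One propagation step: aggregate neighbour features, then transform.
        return torch.relu(self.lin(adj @ x))

class SharedHead(nn.Module):
    """Loss-computing part; its weights are averaged across clients."""
    def __init__(self, hid_dim, n_classes):
        super().__init__()
        self.clf = nn.Linear(hid_dim, n_classes)

    def forward(self, h):
        return self.clf(h)

def fed_avg(heads):
    """Replace every head's weights with their uniform average (FedAvg-style)."""
    avg = copy.deepcopy(heads[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([h.state_dict()[key] for h in heads]).mean(dim=0)
    for head in heads:
        head.load_state_dict(avg)

def train_round(bodies, heads, client_data, lr=0.01):
    """One federated round: a local step on each client, then head averaging."""
    loss_fn = nn.CrossEntropyLoss()
    for body, head, (x, adj, y) in zip(bodies, heads, client_data):
        opt = torch.optim.SGD(list(body.parameters()) + list(head.parameters()), lr=lr)
        opt.zero_grad()
        loss_fn(head(body(x, adj)), y).backward()
        opt.step()
    fed_avg(heads)  # only the heads are synchronised; bodies stay separated

# Toy usage: two clients, each with a small random graph.
clients = 2
bodies = [LocalGNNBody(in_dim=8, hid_dim=16) for _ in range(clients)]
heads = [SharedHead(hid_dim=16, n_classes=3) for _ in range(clients)]
client_data = [(torch.randn(10, 8), torch.eye(10), torch.randint(0, 3, (10,)))
               for _ in range(clients)]
for _ in range(5):
    train_round(bodies, heads, client_data)
```

Note that only the SharedHead parameters are exchanged; each LocalGNNBody stays on its client, which is what lets the scheme tolerate Non-IID features and graph structure across clients.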
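The second component, automatic hyper-parameter tuning via Bayesian optimization, can be sketched in the same hedged spirit. The snippet below uses scikit-optimize's gp_minimize as a stand-in BO engine; the search space and the synthetic objective are illustrative assumptions, since the abstract does not name a specific library or search space.

```python
# Hedged sketch of Bayesian-optimization hyper-parameter tuning.
import math
from skopt import gp_minimize
from skopt.space import Integer, Real

# Illustrative search space: learning rate and hidden width of the local GNNs.
search_space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="lr"),
    Integer(8, 128, name="hid_dim"),
]

def evaluate_federated(params):
    """Placeholder objective standing in for 'run federated training with
    these hyper-parameters and return 1 - validation accuracy'."""
    lr, hid_dim = params
    # Smooth synthetic surface so the optimizer has something to find.
    return (math.log10(lr) + 2.5) ** 2 + ((hid_dim - 64) / 64.0) ** 2

result = gp_minimize(evaluate_federated, search_space, n_calls=20, random_state=0)
print("best hyper-parameters found:", result.x)
```

In a real run, evaluate_federated would execute a few federated rounds (e.g., with train_round from the previous sketch) and return one minus the validation accuracy, with the search space extended to cover the hyper-parameters of all the clients.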
Related papers
- Unleash Graph Neural Networks from Heavy Tuning [33.948899558876604]
Graph Neural Networks (GNNs) are deep-learning architectures designed for graph-type data.
We propose a graph conditional latent diffusion framework (GNN-Diff) to generate high-performing GNNs directly by learning from checkpoints saved during a light-tuning coarse search.
arXiv Detail & Related papers (2024-05-21T06:23:47Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when distribution shifts exist between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push one step forward on the ensemble learning of GNNs, with improved accuracy, robustness, and resistance to adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by this limitation, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T08:16:58Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- Graph Neural Network for Large-Scale Network Localization [35.29322617956428]
Graph neural networks (GNNs) are popular for classifying structured data in machine learning.
In this work, we adopt GNN for a classic but challenging nonlinear regression problem, namely the network localization.
Our main findings are as follows. First, GNNs are potentially the best solution to large-scale network localization in terms of accuracy, robustness, and computational time.
arXiv Detail & Related papers (2020-10-22T12:39:26Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
arXiv Detail & Related papers (2020-06-07T07:06:35Z)
- Vertically Federated Graph Neural Network for Privacy-Preserving Node Classification [39.53937689989282]
VFGNN is a learning paradigm for the privacy-preserving node classification task under the vertically partitioned data setting.
We leave the private-data-related computations on the data holders, and delegate the rest of the computations to a semi-honest server.
We conduct experiments on three benchmarks and the results demonstrate the effectiveness of VFGNN.
arXiv Detail & Related papers (2020-05-25T03:12:18Z)
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.