FedGCN: Convergence-Communication Tradeoffs in Federated Training of
Graph Convolutional Networks
- URL: http://arxiv.org/abs/2201.12433v7
- Date: Mon, 18 Dec 2023 05:12:49 GMT
- Title: FedGCN: Convergence-Communication Tradeoffs in Federated Training of
Graph Convolutional Networks
- Authors: Yuhang Yao, Weizhao Jin, Srivatsan Ravi, Carlee Joe-Wong
- Abstract summary: We introduce the Federated Graph Convolutional Network (FedGCN) algorithm, which uses federated learning to train GCN models for semi-supervised node classification.
Compared to prior methods that require extra communication among clients at each training round, FedGCN clients only communicate with the central server in one pre-training step.
Experimental results show that our FedGCN algorithm achieves better model accuracy with 51.7% faster convergence on average and at least 100X less communication compared to prior work.
- Score: 14.824579000821272
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Methods for training models on graphs distributed across multiple clients
have recently grown in popularity, due to the size of these graphs as well as
regulations on keeping data where it is generated. However, when a single graph is
partitioned across clients, cross-client edges naturally exist between them. Thus,
distributed methods for training a model on such a graph incur either significant
communication overhead between clients or a loss of information that would otherwise
be available to training. We introduce the
Federated Graph Convolutional Network (FedGCN) algorithm, which uses federated
learning to train GCN models for semi-supervised node classification with fast
convergence and little communication. Compared to prior methods that require
extra communication among clients at each training round, FedGCN clients only
communicate with the central server in one pre-training step, greatly reducing
communication costs and allowing the use of homomorphic encryption to further
enhance privacy. We theoretically analyze the tradeoff between FedGCN's
convergence rate and communication cost under different data distributions.
Experimental results show that our FedGCN algorithm achieves better model
accuracy with 51.7% faster convergence on average and at least 100X less
communication compared to prior work.
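To make the one-shot pre-training communication concrete, the sketch below shows how clients might send partial sums of neighbor features to the server, which accumulates them and returns the cross-client aggregates each client needs before ordinary local GCN training begins. This is a minimal illustrative sketch in Python/NumPy, not the authors' implementation: the function names and the dictionary-based data layout are assumptions, and the homomorphic encryption mentioned in the abstract is omitted.

```python
import numpy as np

def client_partial_aggregates(local_features, local_edges, requested_nodes):
    """One-time pre-training step on a client: for each requested node id,
    sum the feature vectors of its neighbors that this client stores."""
    partial = {}
    for u, v in local_edges:                      # undirected edge (u, v)
        for node, nbr in ((u, v), (v, u)):
            if node in requested_nodes and nbr in local_features:
                acc = partial.setdefault(node, np.zeros_like(local_features[nbr]))
                acc += local_features[nbr]
    return partial   # in FedGCN these sums can additionally be encrypted

def server_accumulate(client_partials):
    """Server side: add up the per-client partial sums and return the totals;
    the server never needs the raw cross-client edges or individual features."""
    totals = {}
    for partial in client_partials:
        for node, vec in partial.items():
            totals[node] = totals.get(node, np.zeros_like(vec)) + vec
    return totals   # sent back once; training then proceeds without cross-client traffic
```

After this single exchange, each client holds the aggregated neighbor information its nodes need for GCN propagation, which is why no per-round communication between clients is required.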
Related papers
- Distributed Training of Large Graph Neural Networks with Variable Communication Rates [71.7293735221656]
Training Graph Neural Networks (GNNs) on large graphs presents unique challenges due to the large memory and computing requirements.
Distributed GNN training, where the graph is partitioned across multiple machines, is a common approach to training GNNs on large graphs.
We introduce a variable compression scheme for reducing the communication volume in distributed GNN training without compromising the accuracy of the learned model.
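As a rough illustration of variable-rate compression of data exchanged between graph partitions, the sketch below uniformly quantizes a feature vector to an adjustable bit-width. The paper's actual compression scheme differs; the function names and the 4-bit choice are illustrative assumptions.

```python
import numpy as np

def quantize(x, num_bits):
    """Uniformly quantize a float vector to num_bits per entry (illustration only)."""
    levels = 2 ** num_bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((x - lo) / scale).astype(np.uint32)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return codes.astype(np.float32) * scale + lo

# A machine could, for example, exchange 4-bit codes of boundary-node embeddings
# early in training and raise the bit-width later, trading embedding fidelity
# for communication volume.
x = np.random.randn(128).astype(np.float32)
x_hat = dequantize(*quantize(x, num_bits=4))
```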
arXiv Detail & Related papers (2024-06-25T14:57:38Z)
- Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z)
- Distributed Learning over Networks with Graph-Attention-Based Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as one node in a graph and its node-specific parameters as that node's features, the benefits of the graph attention mechanism can be inherited.
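A minimal sketch of that idea, under the assumption that each agent keeps a shared global parameter block plus a node-specific block and weights its neighbors by attention scores computed from agent features. This is not the GATTA update rule, only an illustration; all names below are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_mix(own_feat, nbr_feats, nbr_local_params):
    """Weight the neighbors' node-specific parameter blocks by graph-attention-style
    scores, here computed as dot-product similarity between agent features."""
    logits = np.array([own_feat @ f for f in nbr_feats])
    weights = softmax(logits)
    return sum(w * p for w, p in zip(weights, nbr_local_params))

# An agent's personalized model = shared global block (averaged across agents,
# as in FedAvg) combined with its own node-specific block, which the attention
# mixture above helps refine using information from neighboring agents.
```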
arXiv Detail & Related papers (2023-05-22T13:48:30Z)
- SalientGrads: Sparse Models for Communication Efficient and Data Aware Distributed Federated Training [1.0413504599164103]
Federated learning (FL) enables training a model on decentralized data held at client sites while preserving privacy, since raw data are never collected centrally.
One of the significant challenges of FL is the limited computation and low communication bandwidth of resource-limited edge client nodes.
We propose Salient Grads, which simplifies the process of sparse training by choosing a data aware subnetwork before training.
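The sketch below illustrates one way a data-aware subnetwork could be chosen before training: each client scores weights with a gradient-based saliency on a few local batches, and the server averages the scores and keeps the top fraction as a fixed sparse mask. This is a hedged sketch using a common connection-saliency heuristic, not the paper's exact procedure; the helper names are assumptions.

```python
import torch

def local_saliency_scores(model, data_loader, loss_fn):
    """Score each weight by |w * grad| accumulated over a few local batches."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += (p.detach() * p.grad).abs()
    return scores

def global_mask(client_scores, keep_ratio=0.1):
    """Server side: average the clients' scores and keep the top fraction of
    weights as a fixed sparse mask used throughout federated training."""
    avg = {n: torch.stack([s[n] for s in client_scores]).mean(0)
           for n in client_scores[0]}
    flat = torch.cat([v.flatten() for v in avg.values()])
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = flat.topk(k).values.min()
    return {n: (v >= threshold).float() for n, v in avg.items()}
```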
arXiv Detail & Related papers (2023-04-15T06:46:37Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
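In analog over-the-air computation, clients transmit their updates simultaneously and the wireless channel physically superposes them, so the server receives a noisy version of their sum in a single channel use. The sketch below mimics this with an additive-noise channel; it ignores fading, power control, and precoding, which the actual scheme must handle, and the names are illustrative.

```python
import numpy as np

def over_the_air_average(client_updates, noise_std=0.01):
    """Analog aggregation: simultaneous transmissions superpose on the channel,
    so the server observes the sum of the updates plus receiver noise."""
    superposed = np.sum(client_updates, axis=0)          # what the channel delivers
    received = superposed + np.random.normal(0.0, noise_std, superposed.shape)
    return received / len(client_updates)                # noisy estimate of the average

# Example: three clients each "transmit" a 4-dimensional update in one channel use.
updates = [np.random.randn(4) for _ in range(3)]
avg_estimate = over_the_air_average(updates)
```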
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- ResFed: Communication Efficient Federated Learning by Transmitting Deep Compressed Residuals [24.13593410107805]
Federated learning enables cooperative training among massively distributed clients by sharing their learned local model parameters.
We introduce a residual-based federated learning framework (ResFed), where residuals rather than model parameters are transmitted in communication networks for training.
By employing a common prediction rule, both locally and globally updated models are always fully recoverable in clients and the server.
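The sketch below illustrates the residual idea under a deliberately simple common prediction rule (predict that the next model equals the previous one), which both client and server can run without extra communication; what gets transmitted is only the leftover residual. ResFed's actual prediction rule and deep compression pipeline differ, and the function names are assumptions.

```python
import numpy as np

def predict_next(model_history):
    """Shared prediction rule: here, simply predict that the next model equals
    the most recent one. Both sides run the same rule, so it costs nothing."""
    return model_history[-1]

def encode_residual(new_model, model_history):
    # Only the residual is sent; in ResFed it would additionally be deep-compressed
    # (e.g. sparsified and quantized) before transmission.
    return new_model - predict_next(model_history)

def decode_residual(residual, model_history):
    # The receiver applies the same prediction rule, so the model is fully recovered.
    return predict_next(model_history) + residual
```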
arXiv Detail & Related papers (2022-12-11T20:34:52Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning(FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape of the original data.
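As a rough sketch of distribution matching, the code below learns a tiny synthetic batch on a client so that its mean output under the current model approximates that of the real local data; the learned synthetic set, rather than model updates, would then be shared. FedDM's actual objective and training loop differ, and the shapes, statistic, and names here are assumptions.

```python
import torch

def match_synthetic_set(model, real_x, real_y, num_syn=10, steps=200, lr=0.1):
    """Learn a small synthetic batch whose behaviour under the current model
    approximates the client's real data (a distribution-matching style surrogate)."""
    for p in model.parameters():                          # freeze the model; only syn_x is learned
        p.requires_grad_(False)
    syn_x = torch.randn(num_syn, real_x.shape[1], requires_grad=True)
    syn_y = real_y[torch.randint(len(real_y), (num_syn,))]   # labels fixed for simplicity
    opt = torch.optim.SGD([syn_x], lr=lr)
    with torch.no_grad():
        real_stat = model(real_x).mean(0)                 # target statistic from real data
    for _ in range(steps):
        opt.zero_grad()
        ((model(syn_x).mean(0) - real_stat) ** 2).sum().backward()
        opt.step()
    return syn_x.detach(), syn_y
```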
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
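The sketch below shows a generic nonlinear (companding-style) quantizer for gradients, which gives small magnitudes finer resolution than a uniform grid would; it is meant only to convey the flavor of nonlinear quantization and is not CosSGD's exact mapping. All names are illustrative.

```python
import numpy as np

def nonlinear_quantize(grad, num_bits=4):
    """Quantize gradient magnitudes on a nonlinear (log-spaced) grid, keep signs."""
    levels = 2 ** (num_bits - 1) - 1
    sign = np.sign(grad)
    mag = np.abs(grad)
    max_mag = mag.max() + 1e-12
    norm = mag / max_mag                                   # map magnitudes to [0, 1]
    codes = np.round(levels * np.log1p(norm * (np.e - 1))) # nonlinear companding
    return sign.astype(np.int8), codes.astype(np.uint8), max_mag

def nonlinear_dequantize(sign, codes, max_mag, num_bits=4):
    levels = 2 ** (num_bits - 1) - 1
    norm = np.expm1(codes / levels) / (np.e - 1)           # invert the companding
    return sign * norm * max_mag
```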
arXiv Detail & Related papers (2020-12-15T12:20:28Z)
- CatFedAvg: Optimising Communication-efficiency and Classification Accuracy in Federated Learning [2.2172881631608456]
We introduce a new family of Federated Learning algorithms called CatFedAvg.
It not only improves communication efficiency but also improves the quality of learning through a category coverage maximization strategy.
Our experiments show an increase of 10 absolute percentage points in accuracy on the MNIST dataset, with 70% (absolute) lower network transfer than FedAvg.
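One way to read "category coverage maximization" is as a greedy selection of participants whose combined label categories cover as many classes as possible per round; the sketch below implements that greedy set-cover heuristic as an assumption about the flavor of the strategy, not as the paper's actual algorithm.

```python
def greedy_category_cover(client_labels, num_selected):
    """Greedily pick clients so the union of their label categories grows fastest.
    client_labels: dict mapping client id -> set of class labels present locally."""
    covered, selected = set(), []
    candidates = dict(client_labels)
    while candidates and len(selected) < num_selected:
        best = max(candidates, key=lambda c: len(candidates[c] - covered))
        selected.append(best)
        covered |= candidates.pop(best)
    return selected, covered

# Example: pick 2 of 3 clients to maximize class coverage in a round.
clients = {"a": {0, 1}, "b": {1, 2, 3}, "c": {3, 4}}
chosen, classes = greedy_category_cover(clients, num_selected=2)
```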
arXiv Detail & Related papers (2020-11-14T06:52:02Z)